00:00:00.001 Started by upstream project "autotest-per-patch" build number 132805
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:03.208 The recommended git tool is: git
00:00:03.208 using credential 00000000-0000-0000-0000-000000000002
00:00:03.210 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:03.220 Fetching changes from the remote Git repository
00:00:03.222 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:03.232 Using shallow fetch with depth 1
00:00:03.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:03.233 > git --version # timeout=10
00:00:03.243 > git --version # 'git version 2.39.2'
00:00:03.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:03.256 Setting http proxy: proxy-dmz.intel.com:911
00:00:03.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.907 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.919 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.932 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.932 > git config core.sparsecheckout # timeout=10
00:00:08.944 > git read-tree -mu HEAD # timeout=10
00:00:08.961 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.985 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.985 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:09.110 [Pipeline] Start of Pipeline
00:00:09.124 [Pipeline] library
00:00:09.126 Loading library shm_lib@master
00:00:09.126 Library shm_lib@master is cached. Copying from home.
00:00:09.144 [Pipeline] node
00:00:09.156 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:09.158 [Pipeline] {
00:00:09.169 [Pipeline] catchError
00:00:09.171 [Pipeline] {
00:00:09.184 [Pipeline] wrap
00:00:09.194 [Pipeline] {
00:00:09.203 [Pipeline] stage
00:00:09.205 [Pipeline] { (Prologue)
00:00:09.429 [Pipeline] sh
00:00:09.708 + logger -p user.info -t JENKINS-CI
00:00:09.726 [Pipeline] echo
00:00:09.727 Node: WFP3
00:00:09.734 [Pipeline] sh
00:00:10.033 [Pipeline] setCustomBuildProperty
00:00:10.044 [Pipeline] echo
00:00:10.045 Cleanup processes
00:00:10.050 [Pipeline] sh
00:00:10.333 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.333 2301384 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.345 [Pipeline] sh
00:00:10.631 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.631 ++ grep -v 'sudo pgrep'
00:00:10.631 ++ awk '{print $1}'
00:00:10.631 + sudo kill -9
00:00:10.631 + true
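The pgrep/kill sequence above is the usual "kill anything still holding the workspace" idiom: the pgrep listing is filtered so the `sudo pgrep` command itself is not a victim, and the kill is allowed to fail when nothing is left to kill, which is why the trace ends in `+ true`. A minimal standalone sketch of that pattern, assuming bash and the workspace path from this run (`kill_stale` is an illustrative name, not an SPDK helper):

#!/usr/bin/env bash
# Kill leftover processes from a previous run that still reference the workspace.
kill_stale() {
    local dir=$1 pids
    # pgrep -af prints "PID full-command"; drop the `sudo pgrep` line itself,
    # then keep only the PID column.
    pids=$(sudo pgrep -af "$dir" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Mirror the `+ true` in the trace: an empty PID list is not an error.
    sudo kill -9 $pids || true   # $pids intentionally unquoted: one arg per PID
}

kill_stale /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk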
00:00:10.644 [Pipeline] cleanWs
00:00:10.654 [WS-CLEANUP] Deleting project workspace...
00:00:10.654 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.659 [WS-CLEANUP] done
00:00:10.663 [Pipeline] setCustomBuildProperty
00:00:10.698 [Pipeline] sh
00:00:10.982 + sudo git config --global --replace-all safe.directory '*'
00:00:11.076 [Pipeline] httpRequest
00:00:11.476 [Pipeline] echo
00:00:11.478 Sorcerer 10.211.164.112 is alive
00:00:11.485 [Pipeline] retry
00:00:11.487 [Pipeline] {
00:00:11.498 [Pipeline] httpRequest
00:00:11.503 HttpMethod: GET
00:00:11.503 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.504 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.532 Response Code: HTTP/1.1 200 OK
00:00:11.533 Success: Status code 200 is in the accepted range: 200,404
00:00:11.533 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.400 [Pipeline] }
00:00:33.418 [Pipeline] // retry
00:00:33.426 [Pipeline] sh
00:00:33.712 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.727 [Pipeline] httpRequest
00:00:34.165 [Pipeline] echo
00:00:34.167 Sorcerer 10.211.164.112 is alive
00:00:34.177 [Pipeline] retry
00:00:34.178 [Pipeline] {
00:00:34.193 [Pipeline] httpRequest
00:00:34.197 HttpMethod: GET
00:00:34.198 URL: http://10.211.164.112/packages/spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:00:34.199 Sending request to url: http://10.211.164.112/packages/spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:00:34.206 Response Code: HTTP/1.1 200 OK
00:00:34.206 Success: Status code 200 is in the accepted range: 200,404
00:00:34.207 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:02:09.210 [Pipeline] }
00:02:09.228 [Pipeline] // retry
00:02:09.236 [Pipeline] sh
00:02:09.522 + tar --no-same-owner -xf spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:02:12.068 [Pipeline] sh
00:02:12.353 + git -C spdk log --oneline -n5
00:02:12.353 6584139bf build: use VERSION file for storing version
00:02:12.353 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:02:12.353 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:12.353 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:12.353 e2dfdf06c accel/mlx5: Register post_poller handler
00:02:12.363 [Pipeline] }
00:02:12.378 [Pipeline] // stage
00:02:12.387 [Pipeline] stage
00:02:12.389 [Pipeline] { (Prepare)
00:02:12.405 [Pipeline] writeFile
00:02:12.421 [Pipeline] sh
00:02:12.701 + logger -p user.info -t JENKINS-CI
00:02:12.713 [Pipeline] sh
00:02:12.998 + logger -p user.info -t JENKINS-CI
00:02:13.010 [Pipeline] sh
00:02:13.295 + cat autorun-spdk.conf
00:02:13.295 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.295 SPDK_TEST_NVMF=1
00:02:13.295 SPDK_TEST_NVME_CLI=1
00:02:13.295 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.295 SPDK_TEST_NVMF_NICS=e810
00:02:13.295 SPDK_TEST_VFIOUSER=1
00:02:13.295 SPDK_RUN_UBSAN=1
00:02:13.295 NET_TYPE=phy
00:02:13.303 RUN_NIGHTLY=0
00:02:13.307 [Pipeline] readFile
00:02:13.332 [Pipeline] withEnv
00:02:13.334 [Pipeline] {
00:02:13.346 [Pipeline] sh
00:02:13.634 + set -ex
00:02:13.635 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:13.635 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:13.635 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.635 ++ SPDK_TEST_NVMF=1
00:02:13.635 ++ SPDK_TEST_NVME_CLI=1
00:02:13.635 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.635 ++ SPDK_TEST_NVMF_NICS=e810
00:02:13.635 ++ SPDK_TEST_VFIOUSER=1
00:02:13.635 ++ SPDK_RUN_UBSAN=1
00:02:13.635 ++ NET_TYPE=phy
00:02:13.635 ++ RUN_NIGHTLY=0
00:02:13.635 + case $SPDK_TEST_NVMF_NICS in
00:02:13.635 + DRIVERS=ice
00:02:13.635 + [[ tcp == \r\d\m\a ]]
00:02:13.635 + [[ -n ice ]]
00:02:13.635 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:13.635 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:13.635 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:13.635 rmmod: ERROR: Module i40iw is not currently loaded
00:02:13.635 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:13.635 + true
00:02:13.635 + for D in $DRIVERS
00:02:13.635 + sudo modprobe ice
00:02:13.635 + exit 0
00:02:13.645 [Pipeline] }
00:02:13.660 [Pipeline] // withEnv
00:02:13.665 [Pipeline] }
00:02:13.679 [Pipeline] // stage
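The `set -ex` block above sources autorun-spdk.conf and then turns SPDK_TEST_NVMF_NICS into a kernel module to load, unloading potentially conflicting RDMA drivers first and tolerating modules that are not loaded. A condensed sketch of that flow, assuming the same conf path; only the e810-to-ice branch is taken from this log, and the second case arm is illustrative:

#!/usr/bin/env bash
set -ex
conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
[[ -f $conf ]] && source "$conf"

# Map the NIC family under test to its kernel driver.
case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;        # Intel E810, as in this run
    mlx5) DRIVERS=mlx5_ib ;;    # illustrative branch, not exercised here
esac

if [[ -n $DRIVERS ]]; then
    # "not currently loaded" errors are expected and ignored, as in the trace.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done
fi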
00:02:13.688 [Pipeline] catchError
00:02:13.690 [Pipeline] {
00:02:13.704 [Pipeline] timeout
00:02:13.704 Timeout set to expire in 1 hr 0 min
00:02:13.706 [Pipeline] {
00:02:13.720 [Pipeline] stage
00:02:13.722 [Pipeline] { (Tests)
00:02:13.736 [Pipeline] sh
00:02:14.022 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.022 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.022 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.022 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:14.022 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:14.022 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.022 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:14.022 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.022 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.022 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.022 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:14.022 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.022 + source /etc/os-release
00:02:14.022 ++ NAME='Fedora Linux'
00:02:14.022 ++ VERSION='39 (Cloud Edition)'
00:02:14.022 ++ ID=fedora
00:02:14.022 ++ VERSION_ID=39
00:02:14.022 ++ VERSION_CODENAME=
00:02:14.022 ++ PLATFORM_ID=platform:f39
00:02:14.022 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:14.022 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.022 ++ LOGO=fedora-logo-icon
00:02:14.022 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:14.022 ++ HOME_URL=https://fedoraproject.org/
00:02:14.022 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:14.022 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.022 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.022 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.022 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:14.022 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.022 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:14.022 ++ SUPPORT_END=2024-11-12
00:02:14.022 ++ VARIANT='Cloud Edition'
00:02:14.022 ++ VARIANT_ID=cloud
00:02:14.022 + uname -a
00:02:14.022 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:02:14.022 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:16.561 Hugepages
00:02:16.561 node hugesize free / total
00:02:16.561 node0 1048576kB 0 / 0
00:02:16.561 node0 2048kB 0 / 0
00:02:16.561 node1 1048576kB 0 / 0
00:02:16.561 node1 2048kB 0 / 0
00:02:16.561 
00:02:16.561 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:16.561 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:16.561 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:16.821 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:16.821 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:02:16.821 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:16.821 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
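The Hugepages table printed by setup.sh status (all zeros here, since no hugepages have been reserved yet) can be reproduced directly from sysfs. A small sketch using standard Linux sysfs paths; the loop itself is illustrative, not how setup.sh is implemented:

#!/usr/bin/env bash
# Print "node hugesize free / total" per NUMA node, like `setup.sh status`.
echo "node hugesize free / total"
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}   # e.g. 2048kB or 1048576kB
        echo "$(basename "$node") $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
    done
done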
00:02:16.821 + rm -f /tmp/spdk-ld-path
00:02:16.821 + source autorun-spdk.conf
00:02:16.821 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.821 ++ SPDK_TEST_NVMF=1
00:02:16.821 ++ SPDK_TEST_NVME_CLI=1
00:02:16.821 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.821 ++ SPDK_TEST_NVMF_NICS=e810
00:02:16.821 ++ SPDK_TEST_VFIOUSER=1
00:02:16.821 ++ SPDK_RUN_UBSAN=1
00:02:16.821 ++ NET_TYPE=phy
00:02:16.821 ++ RUN_NIGHTLY=0
00:02:16.821 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:16.821 + [[ -n '' ]]
00:02:16.821 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:16.821 + for M in /var/spdk/build-*-manifest.txt
00:02:16.821 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:16.821 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.821 + for M in /var/spdk/build-*-manifest.txt
00:02:16.821 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:16.821 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.821 + for M in /var/spdk/build-*-manifest.txt
00:02:16.821 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:16.821 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.821 ++ uname
00:02:16.821 + [[ Linux == \L\i\n\u\x ]]
00:02:16.821 + sudo dmesg -T
00:02:16.821 + sudo dmesg --clear
00:02:17.081 + dmesg_pid=2302893
00:02:17.081 + [[ Fedora Linux == FreeBSD ]]
00:02:17.081 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.081 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.081 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:17.081 + [[ -x /usr/src/fio-static/fio ]]
00:02:17.081 + export FIO_BIN=/usr/src/fio-static/fio
00:02:17.081 + FIO_BIN=/usr/src/fio-static/fio
00:02:17.081 + sudo dmesg -Tw
00:02:17.081 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:17.081 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:17.081 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:17.081 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.081 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.081 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:17.081 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.081 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.081 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:17.081 17:12:46 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:17.081 17:12:46 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:17.081 17:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:17.081 17:12:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:17.081 17:12:46 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:17.081 17:12:46 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:17.081 17:12:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:17.081 17:12:46 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:17.081 17:12:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:17.081 17:12:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:17.081 17:12:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:17.081 17:12:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.081 17:12:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.081 17:12:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.082 17:12:46 -- paths/export.sh@5 -- $ export PATH
00:02:17.082 17:12:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.082 17:12:46 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:17.082 17:12:46 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:17.082 Traceback (most recent call last):
00:02:17.082 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:02:17.082 import spdk.rpc as rpc # noqa
00:02:17.082 ^^^^^^^^^^^^^^^^^^^^^^
00:02:17.082 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module>
00:02:17.082 from .version import __version__
00:02:17.082 ModuleNotFoundError: No module named 'spdk.version'
00:02:17.082 17:12:46 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733760766.XXXXXX
00:02:17.082 17:12:46 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733760766.wQK2gl
00:02:17.082 17:12:46 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:17.082 17:12:46 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:17.082 17:12:46 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:17.082 17:12:46 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:17.082 17:12:46 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:17.082 17:12:46 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:17.082 17:12:46 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:17.082 17:12:46 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.082 17:12:46 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:17.082 17:12:46 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:17.082 17:12:46 -- pm/common@17 -- $ local monitor
00:02:17.082 17:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.082 17:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.082 17:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.082 17:12:46 -- pm/common@21 -- $ date +%s
00:02:17.082 17:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.082 17:12:46 -- pm/common@21 -- $ date +%s
00:02:17.082 17:12:46 -- pm/common@25 -- $ sleep 1
00:02:17.082 17:12:46 -- pm/common@21 -- $ date +%s
00:02:17.082 17:12:46 -- pm/common@21 -- $ date +%s
00:02:17.082 17:12:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760766
00:02:17.082 17:12:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760766
00:02:17.082 17:12:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760766
00:02:17.082 17:12:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733760766
00:02:17.082 Traceback (most recent call last):
00:02:17.082 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:02:17.082 import spdk.rpc as rpc # noqa
00:02:17.082 ^^^^^^^^^^^^^^^^^^^^^^
00:02:17.082 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module>
00:02:17.082 from .version import __version__
00:02:17.082 ModuleNotFoundError: No module named 'spdk.version'
00:02:17.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760766_collect-cpu-load.pm.log
00:02:17.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760766_collect-vmstat.pm.log
00:02:17.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760766_collect-cpu-temp.pm.log
00:02:17.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733760766_collect-bmc-pm.bmc.pm.log
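All four collect-* monitors are launched in the background with one shared epoch suffix (1733760766 here) so their logs and pidfiles can be matched up and torn down together by stop_monitor_resources later. A rough sketch of that launch pattern, using the script paths and options from the trace; the loop is a simplification of what pm/common actually does:

#!/usr/bin/env bash
pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
now=$(date +%s)   # one timestamp shared by every monitor, as in the trace

# -d: output directory, -l: log to file, -p: pidfile/log-name prefix.
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    "$pm/$mon" -d "$out" -l -p "monitor.autobuild.sh.$now" &
done
# BMC power readings need root, hence the sudo -E in the original trace.
sudo -E "$pm/collect-bmc-pm" -d "$out" -l -p "monitor.autobuild.sh.$now" &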
00:02:18.021 17:12:47 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:18.021 17:12:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:18.021 17:12:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:18.021 17:12:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:18.021 17:12:47 -- spdk/autobuild.sh@16 -- $ date -u
00:02:18.021 Mon Dec 9 04:12:47 PM UTC 2024
00:02:18.021 17:12:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:18.281 v25.01-pre-304-g6584139bf
00:02:18.281 17:12:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:18.281 17:12:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:18.281 17:12:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:18.281 17:12:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:18.281 17:12:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:18.281 17:12:47 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.281 ************************************
00:02:18.281 START TEST ubsan
00:02:18.281 ************************************
00:02:18.281 17:12:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:18.281 using ubsan
00:02:18.281 
00:02:18.281 real 0m0.001s
00:02:18.281 user 0m0.000s
00:02:18.281 sys 0m0.000s
00:02:18.281 17:12:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:18.281 17:12:47 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:18.281 ************************************
00:02:18.281 END TEST ubsan
00:02:18.281 ************************************
00:02:18.281 17:12:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:18.281 17:12:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:18.281 17:12:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:18.281 17:12:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:18.281 17:12:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:18.281 17:12:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:18.281 17:12:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:18.281 17:12:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:18.281 17:12:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:18.281 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:18.281 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:18.850 Using 'verbs' RDMA provider
00:02:31.654 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:43.870 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:43.870 Creating mk/config.mk...done.
00:02:43.870 Creating mk/cc.flags.mk...done.
00:02:43.870 Type 'make' to build.
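The START TEST / END TEST banners and the real/user/sys lines around the ubsan check come from the run_test helper, which times a named command; the same wrapper produces the make section that follows. A simplified take on it (the real implementation lives in common/autotest_common.sh and does more bookkeeping than this):

#!/usr/bin/env bash
# Simplified run_test: banner, time the command, banner, keep the exit code.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

run_test ubsan echo 'using ubsan'
run_test make make -j96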
00:02:43.870 17:13:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:43.870 17:13:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:43.870 17:13:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:43.870 17:13:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.870 ************************************
00:02:43.870 START TEST make
00:02:43.870 ************************************
00:02:43.870 17:13:12 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:45.786 The Meson build system
00:02:45.786 Version: 1.5.0
00:02:45.786 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:45.786 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:45.786 Build type: native build
00:02:45.786 Project name: libvfio-user
00:02:45.786 Project version: 0.0.1
00:02:45.786 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:45.786 C linker for the host machine: cc ld.bfd 2.40-14
00:02:45.786 Host machine cpu family: x86_64
00:02:45.786 Host machine cpu: x86_64
00:02:45.786 Run-time dependency threads found: YES
00:02:45.786 Library dl found: YES
00:02:45.786 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:45.786 Run-time dependency json-c found: YES 0.17
00:02:45.786 Run-time dependency cmocka found: YES 1.1.7
00:02:45.786 Program pytest-3 found: NO
00:02:45.786 Program flake8 found: NO
00:02:45.786 Program misspell-fixer found: NO
00:02:45.786 Program restructuredtext-lint found: NO
00:02:45.786 Program valgrind found: YES (/usr/bin/valgrind)
00:02:45.786 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:45.786 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:45.786 Compiler for C supports arguments -Wwrite-strings: YES
00:02:45.786 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:45.786 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:45.786 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:45.786 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:45.786 Build targets in project: 8
00:02:45.786 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:45.786 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:45.786 
00:02:45.786 libvfio-user 0.0.1
00:02:45.786 
00:02:45.786 User defined options
00:02:45.786 buildtype : debug
00:02:45.786 default_library: shared
00:02:45.786 libdir : /usr/local/lib
00:02:45.786 
00:02:45.786 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:46.354 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:46.354 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:46.354 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:46.354 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:46.354 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:46.354 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:46.354 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:46.354 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:46.354 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:46.354 [9/37] Compiling C object samples/null.p/null.c.o
00:02:46.354 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:46.354 [11/37] Compiling C object samples/server.p/server.c.o
00:02:46.354 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:46.354 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:46.354 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:46.354 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:46.354 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:46.354 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:46.354 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:46.354 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:46.354 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:46.354 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:46.354 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:46.354 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:46.354 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:46.354 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:46.354 [26/37] Compiling C object samples/client.p/client.c.o
00:02:46.354 [27/37] Linking target samples/client
00:02:46.612 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:46.612 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:46.612 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:46.612 [31/37] Linking target test/unit_tests
00:02:46.612 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:46.612 [33/37] Linking target samples/gpio-pci-idio-16
00:02:46.612 [34/37] Linking target samples/null
00:02:46.612 [35/37] Linking target samples/shadow_ioeventfd_server
00:02:46.612 [36/37] Linking target samples/server
00:02:46.612 [37/37] Linking target samples/lspci
00:02:46.612 INFO: autodetecting backend as ninja
00:02:46.612 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:46.870 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:47.128 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:47.129 ninja: no work to do.
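The libvfio-user step above is a plain Meson out-of-tree build: configure a build directory with the options echoed in the summary, build it with ninja, then stage the artifacts with DESTDIR instead of installing system-wide. A condensed sketch of the equivalent commands, using the paths from this log; SPDK's makefiles drive this step, so the exact invocation may differ:

#!/usr/bin/env bash
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

# Matches the summary above: buildtype debug, default_library shared.
meson setup "$build" "$src" --buildtype debug --default-library shared \
      --libdir /usr/local/lib
ninja -C "$build"
# Stage into the SPDK tree; this mirrors the literal install line in the log.
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C "$build"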
00:02:52.400 The Meson build system
00:02:52.400 Version: 1.5.0
00:02:52.400 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:52.400 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:52.400 Build type: native build
00:02:52.400 Program cat found: YES (/usr/bin/cat)
00:02:52.400 Project name: DPDK
00:02:52.400 Project version: 24.03.0
00:02:52.400 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:52.400 C linker for the host machine: cc ld.bfd 2.40-14
00:02:52.400 Host machine cpu family: x86_64
00:02:52.400 Host machine cpu: x86_64
00:02:52.400 Message: ## Building in Developer Mode ##
00:02:52.400 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:52.400 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:52.400 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:52.400 Program python3 found: YES (/usr/bin/python3)
00:02:52.400 Program cat found: YES (/usr/bin/cat)
00:02:52.400 Compiler for C supports arguments -march=native: YES
00:02:52.400 Checking for size of "void *" : 8
00:02:52.400 Checking for size of "void *" : 8 (cached)
00:02:52.400 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:52.400 Library m found: YES
00:02:52.400 Library numa found: YES
00:02:52.400 Has header "numaif.h" : YES
00:02:52.400 Library fdt found: NO
00:02:52.400 Library execinfo found: NO
00:02:52.400 Has header "execinfo.h" : YES
00:02:52.400 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:52.400 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:52.400 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:52.400 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:52.400 Run-time dependency openssl found: YES 3.1.1
00:02:52.400 Run-time dependency libpcap found: YES 1.10.4
00:02:52.400 Has header "pcap.h" with dependency libpcap: YES
00:02:52.400 Compiler for C supports arguments -Wcast-qual: YES
00:02:52.400 Compiler for C supports arguments -Wdeprecated: YES
00:02:52.400 Compiler for C supports arguments -Wformat: YES
00:02:52.400 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:52.400 Compiler for C supports arguments -Wformat-security: NO
00:02:52.400 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:52.400 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:52.400 Compiler for C supports arguments -Wnested-externs: YES
00:02:52.400 Compiler for C supports arguments -Wold-style-definition: YES
00:02:52.400 Compiler for C supports arguments -Wpointer-arith: YES
00:02:52.400 Compiler for C supports arguments -Wsign-compare: YES
00:02:52.400 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:52.400 Compiler for C supports arguments -Wundef: YES
00:02:52.400 Compiler for C supports arguments -Wwrite-strings: YES
00:02:52.400 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:52.400 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:52.400 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:52.400 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:52.400 Program objdump found: YES (/usr/bin/objdump)
00:02:52.400 Compiler for C supports arguments -mavx512f: YES
00:02:52.400 Checking if "AVX512 checking" compiles: YES
00:02:52.400 Fetching value of define "__SSE4_2__" : 1
00:02:52.400 Fetching value of define "__AES__" : 1
00:02:52.400 Fetching value of define "__AVX__" : 1
00:02:52.400 Fetching value of define "__AVX2__" : 1
00:02:52.400 Fetching value of define "__AVX512BW__" : 1
00:02:52.400 Fetching value of define "__AVX512CD__" : 1
00:02:52.400 Fetching value of define "__AVX512DQ__" : 1
00:02:52.400 Fetching value of define "__AVX512F__" : 1
00:02:52.400 Fetching value of define "__AVX512VL__" : 1
00:02:52.400 Fetching value of define "__PCLMUL__" : 1
00:02:52.400 Fetching value of define "__RDRND__" : 1
00:02:52.400 Fetching value of define "__RDSEED__" : 1
00:02:52.400 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:52.400 Fetching value of define "__znver1__" : (undefined)
00:02:52.400 Fetching value of define "__znver2__" : (undefined)
00:02:52.400 Fetching value of define "__znver3__" : (undefined)
00:02:52.400 Fetching value of define "__znver4__" : (undefined)
00:02:52.400 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:52.400 Message: lib/log: Defining dependency "log"
00:02:52.400 Message: lib/kvargs: Defining dependency "kvargs"
00:02:52.400 Message: lib/telemetry: Defining dependency "telemetry"
00:02:52.400 Checking for function "getentropy" : NO
00:02:52.400 Message: lib/eal: Defining dependency "eal"
00:02:52.400 Message: lib/ring: Defining dependency "ring"
00:02:52.400 Message: lib/rcu: Defining dependency "rcu"
00:02:52.400 Message: lib/mempool: Defining dependency "mempool"
00:02:52.400 Message: lib/mbuf: Defining dependency "mbuf"
00:02:52.400 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:52.400 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:52.400 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:52.400 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:52.400 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:52.400 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:52.400 Compiler for C supports arguments -mpclmul: YES
00:02:52.400 Compiler for C supports arguments -maes: YES
00:02:52.400 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:52.400 Compiler for C supports arguments -mavx512bw: YES
00:02:52.400 Compiler for C supports arguments -mavx512dq: YES
00:02:52.400 Compiler for C supports arguments -mavx512vl: YES
00:02:52.400 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:52.400 Compiler for C supports arguments -mavx2: YES
00:02:52.400 Compiler for C supports arguments -mavx: YES
00:02:52.400 Message: lib/net: Defining dependency "net"
00:02:52.400 Message: lib/meter: Defining dependency "meter"
00:02:52.400 Message: lib/ethdev: Defining dependency "ethdev"
00:02:52.400 Message: lib/pci: Defining dependency "pci"
00:02:52.400 Message: lib/cmdline: Defining dependency "cmdline"
00:02:52.400 Message: lib/hash: Defining dependency "hash"
00:02:52.400 Message: lib/timer: Defining dependency "timer"
00:02:52.400 Message: lib/compressdev: Defining dependency "compressdev"
00:02:52.400 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:52.400 Message: lib/dmadev: Defining dependency "dmadev"
00:02:52.400 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:52.400 Message: lib/power: Defining dependency "power"
00:02:52.400 Message: lib/reorder: Defining dependency "reorder"
00:02:52.400 Message: lib/security: Defining dependency "security"
00:02:52.400 Has header "linux/userfaultfd.h" : YES
00:02:52.400 Has header "linux/vduse.h" : YES
00:02:52.400 Message: lib/vhost: Defining dependency "vhost"
00:02:52.400 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:52.400 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:52.400 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:52.400 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:52.400 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:52.400 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:52.400 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:52.400 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:52.400 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:52.400 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:52.400 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:52.400 Configuring doxy-api-html.conf using configuration
00:02:52.400 Configuring doxy-api-man.conf using configuration
00:02:52.400 Program mandb found: YES (/usr/bin/mandb)
00:02:52.400 Program sphinx-build found: NO
00:02:52.400 Configuring rte_build_config.h using configuration
00:02:52.400 Message: 
00:02:52.400 =================
00:02:52.400 Applications Enabled
00:02:52.400 =================
00:02:52.400 
00:02:52.400 apps:
00:02:52.400 
00:02:52.400 
00:02:52.400 Message: 
00:02:52.401 =================
00:02:52.401 Libraries Enabled
00:02:52.401 =================
00:02:52.401 
00:02:52.401 libs:
00:02:52.401 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:52.401 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:52.401 cryptodev, dmadev, power, reorder, security, vhost,
00:02:52.401 
00:02:52.401 Message: 
00:02:52.401 ===============
00:02:52.401 Drivers Enabled
00:02:52.401 ===============
00:02:52.401 
00:02:52.401 common:
00:02:52.401 
00:02:52.401 bus:
00:02:52.401 pci, vdev,
00:02:52.401 mempool:
00:02:52.401 ring,
00:02:52.401 dma:
00:02:52.401 
00:02:52.401 net:
00:02:52.401 
00:02:52.401 crypto:
00:02:52.401 
00:02:52.401 compress:
00:02:52.401 
00:02:52.401 vdpa:
00:02:52.401 
00:02:52.401 
00:02:52.401 Message: 
00:02:52.401 =================
00:02:52.401 Content Skipped
00:02:52.401 =================
00:02:52.401 
00:02:52.401 apps:
00:02:52.401 dumpcap: explicitly disabled via build config
00:02:52.401 graph: explicitly disabled via build config
00:02:52.401 pdump: explicitly disabled via build config
00:02:52.401 proc-info: explicitly disabled via build config
00:02:52.401 test-acl: explicitly disabled via build config
00:02:52.401 test-bbdev: explicitly disabled via build config
00:02:52.401 test-cmdline: explicitly disabled via build config
00:02:52.401 test-compress-perf: explicitly disabled via build config
00:02:52.401 test-crypto-perf: explicitly disabled via build config
00:02:52.401 test-dma-perf: explicitly disabled via build config
00:02:52.401 test-eventdev: explicitly disabled via build config
00:02:52.401 test-fib: explicitly disabled via build config
00:02:52.401 test-flow-perf: explicitly disabled via build config
00:02:52.401 test-gpudev: explicitly disabled via build config
00:02:52.401 test-mldev: explicitly disabled via build config
00:02:52.401 test-pipeline: explicitly disabled via build config
00:02:52.401 test-pmd: explicitly disabled via build config
00:02:52.401 test-regex: explicitly disabled via build config
00:02:52.401 test-sad: explicitly disabled via build config
00:02:52.401 test-security-perf: explicitly disabled via build config
00:02:52.401 
00:02:52.401 libs:
00:02:52.401 argparse: explicitly disabled via build config
00:02:52.401 metrics: explicitly disabled via build config
00:02:52.401 acl: explicitly disabled via build config
00:02:52.401 bbdev: explicitly disabled via build config
00:02:52.401 bitratestats: explicitly disabled via build config
00:02:52.401 bpf: explicitly disabled via build config
00:02:52.401 cfgfile: explicitly disabled via build config
00:02:52.401 distributor: explicitly disabled via build config
00:02:52.401 efd: explicitly disabled via build config
00:02:52.401 eventdev: explicitly disabled via build config
00:02:52.401 dispatcher: explicitly disabled via build config
00:02:52.401 gpudev: explicitly disabled via build config
00:02:52.401 gro: explicitly disabled via build config
00:02:52.401 gso: explicitly disabled via build config
00:02:52.401 ip_frag: explicitly disabled via build config
00:02:52.401 jobstats: explicitly disabled via build config
00:02:52.401 latencystats: explicitly disabled via build config
00:02:52.401 lpm: explicitly disabled via build config
00:02:52.401 member: explicitly disabled via build config
00:02:52.401 pcapng: explicitly disabled via build config
00:02:52.401 rawdev: explicitly disabled via build config
00:02:52.401 regexdev: explicitly disabled via build config
00:02:52.401 mldev: explicitly disabled via build config
00:02:52.401 rib: explicitly disabled via build config
00:02:52.401 sched: explicitly disabled via build config
00:02:52.401 stack: explicitly disabled via build config
00:02:52.401 ipsec: explicitly disabled via build config
00:02:52.401 pdcp: explicitly disabled via build config
00:02:52.401 fib: explicitly disabled via build config
00:02:52.401 port: explicitly disabled via build config
00:02:52.401 pdump: explicitly disabled via build config
00:02:52.401 table: explicitly disabled via build config
00:02:52.401 pipeline: explicitly disabled via build config
00:02:52.401 graph: explicitly disabled via build config
00:02:52.401 node: explicitly disabled via build config
00:02:52.401 
00:02:52.401 drivers:
00:02:52.401 common/cpt: not in enabled drivers build config
00:02:52.401 common/dpaax: not in enabled drivers build config
00:02:52.401 common/iavf: not in enabled drivers build config
00:02:52.401 common/idpf: not in enabled drivers build config
00:02:52.401 common/ionic: not in enabled drivers build config
00:02:52.401 common/mvep: not in enabled drivers build config
00:02:52.401 common/octeontx: not in enabled drivers build config
00:02:52.401 bus/auxiliary: not in enabled drivers build config
00:02:52.401 bus/cdx: not in enabled drivers build config
00:02:52.401 bus/dpaa: not in enabled drivers build config
00:02:52.401 bus/fslmc: not in enabled drivers build config
00:02:52.401 bus/ifpga: not in enabled drivers build config
00:02:52.401 bus/platform: not in enabled drivers build config
00:02:52.401 bus/uacce: not in enabled drivers build config
00:02:52.401 bus/vmbus: not in enabled drivers build config
00:02:52.401 common/cnxk: not in enabled drivers build config
00:02:52.401 common/mlx5: not in enabled drivers build config
00:02:52.401 common/nfp: not in enabled drivers build config
00:02:52.401 common/nitrox: not in enabled drivers build config
00:02:52.401 common/qat: not in enabled drivers build config
00:02:52.401 common/sfc_efx: not in enabled drivers build config
00:02:52.401 mempool/bucket: not in enabled drivers build config
00:02:52.401 mempool/cnxk: not in enabled drivers build config
00:02:52.401 mempool/dpaa: not in enabled drivers build config
00:02:52.401 mempool/dpaa2: not in enabled drivers build config
00:02:52.401 mempool/octeontx: not in enabled drivers build config
00:02:52.401 mempool/stack: not in enabled drivers build config
00:02:52.401 dma/cnxk: not in enabled drivers build config
00:02:52.401 dma/dpaa: not in enabled drivers build config
00:02:52.401 dma/dpaa2: not in enabled drivers build config
00:02:52.401 dma/hisilicon: not in enabled drivers build config
00:02:52.401 dma/idxd: not in enabled drivers build config
00:02:52.401 dma/ioat: not in enabled drivers build config
00:02:52.401 dma/skeleton: not in enabled drivers build config
00:02:52.401 net/af_packet: not in enabled drivers build config
00:02:52.401 net/af_xdp: not in enabled drivers build config
00:02:52.401 net/ark: not in enabled drivers build config
00:02:52.401 net/atlantic: not in enabled drivers build config
00:02:52.401 net/avp: not in enabled drivers build config
00:02:52.401 net/axgbe: not in enabled drivers build config
00:02:52.401 net/bnx2x: not in enabled drivers build config
00:02:52.401 net/bnxt: not in enabled drivers build config
00:02:52.401 net/bonding: not in enabled drivers build config
00:02:52.401 net/cnxk: not in enabled drivers build config
00:02:52.401 net/cpfl: not in enabled drivers build config
00:02:52.401 net/cxgbe: not in enabled drivers build config
00:02:52.401 net/dpaa: not in enabled drivers build config
00:02:52.401 net/dpaa2: not in enabled drivers build config
00:02:52.401 net/e1000: not in enabled drivers build config
00:02:52.401 net/ena: not in enabled drivers build config
00:02:52.401 net/enetc: not in enabled drivers build config
00:02:52.401 net/enetfec: not in enabled drivers build config
00:02:52.401 net/enic: not in enabled drivers build config
00:02:52.401 net/failsafe: not in enabled drivers build config
00:02:52.401 net/fm10k: not in enabled drivers build config
00:02:52.401 net/gve: not in enabled drivers build config
00:02:52.401 net/hinic: not in enabled drivers build config
00:02:52.401 net/hns3: not in enabled drivers build config
00:02:52.401 net/i40e: not in enabled drivers build config
00:02:52.401 net/iavf: not in enabled drivers build config
00:02:52.401 net/ice: not in enabled drivers build config
00:02:52.401 net/idpf: not in enabled drivers build config
00:02:52.401 net/igc: not in enabled drivers build config
00:02:52.401 net/ionic: not in enabled drivers build config
00:02:52.401 net/ipn3ke: not in enabled drivers build config
00:02:52.401 net/ixgbe: not in enabled drivers build config
00:02:52.401 net/mana: not in enabled drivers build config
00:02:52.401 net/memif: not in enabled drivers build config
00:02:52.401 net/mlx4: not in enabled drivers build config
00:02:52.401 net/mlx5: not in enabled drivers build config
00:02:52.401 net/mvneta: not in enabled drivers build config
00:02:52.401 net/mvpp2: not in enabled drivers build config
00:02:52.401 net/netvsc: not in enabled drivers build config
00:02:52.401 net/nfb: not in enabled drivers build config
00:02:52.401 net/nfp: not in enabled drivers build config
00:02:52.401 net/ngbe: not in enabled drivers build config
00:02:52.401 net/null: not in enabled drivers build config
00:02:52.401 net/octeontx: not in enabled drivers build config
00:02:52.401 net/octeon_ep: not in enabled drivers build config
00:02:52.401 net/pcap: not in enabled drivers build config
00:02:52.401 net/pfe: not in enabled drivers build config
00:02:52.401 net/qede: not in enabled drivers build config
00:02:52.401 net/ring: not in enabled drivers build config
00:02:52.401 net/sfc: not in enabled drivers build config
00:02:52.401 net/softnic: not in enabled drivers build config
00:02:52.401 net/tap: not in enabled drivers build config
00:02:52.401 net/thunderx: not in enabled drivers build config
00:02:52.401 net/txgbe: not in enabled drivers build config
00:02:52.401 net/vdev_netvsc: not in enabled drivers build config
00:02:52.401 net/vhost: not in enabled drivers build config
00:02:52.401 net/virtio: not in enabled drivers build config
00:02:52.401 net/vmxnet3: not in enabled drivers build config
00:02:52.401 raw/*: missing internal dependency, "rawdev"
00:02:52.401 crypto/armv8: not in enabled drivers build config
00:02:52.401 crypto/bcmfs: not in enabled drivers build config
00:02:52.401 crypto/caam_jr: not in enabled drivers build config
00:02:52.401 crypto/ccp: not in enabled drivers build config
00:02:52.401 crypto/cnxk: not in enabled drivers build config
00:02:52.401 crypto/dpaa_sec: not in enabled drivers build config
00:02:52.401 crypto/dpaa2_sec: not in enabled drivers build config
00:02:52.401 crypto/ipsec_mb: not in enabled drivers build config
00:02:52.401 crypto/mlx5: not in enabled drivers build config
00:02:52.401 crypto/mvsam: not in enabled drivers build config
00:02:52.401 crypto/nitrox: not in enabled drivers build config
00:02:52.401 crypto/null: not in enabled drivers build config
00:02:52.401 crypto/octeontx: not in enabled drivers build config
00:02:52.401 crypto/openssl: not in enabled drivers build config
00:02:52.401 crypto/scheduler: not in enabled drivers build config
00:02:52.401 crypto/uadk: not in enabled drivers build config
00:02:52.401 crypto/virtio: not in enabled drivers build config
00:02:52.402 compress/isal: not in enabled drivers build config
00:02:52.402 compress/mlx5: not in enabled drivers build config
00:02:52.402 compress/nitrox: not in enabled drivers build config
00:02:52.402 compress/octeontx: not in enabled drivers build config
00:02:52.402 compress/zlib: not in enabled drivers build config
00:02:52.402 regex/*: missing internal dependency, "regexdev"
00:02:52.402 ml/*: missing internal dependency, "mldev"
00:02:52.402 vdpa/ifc: not in enabled drivers build config
00:02:52.402 vdpa/mlx5: not in enabled drivers build config
00:02:52.402 vdpa/nfp: not in enabled drivers build config
00:02:52.402 vdpa/sfc: not in enabled drivers build config
00:02:52.402 event/*: missing internal dependency, "eventdev"
00:02:52.402 baseband/*: missing internal dependency, "bbdev"
00:02:52.402 gpu/*: missing internal dependency, "gpudev"
00:02:52.402 
00:02:52.402 
00:02:52.402 Build targets in project: 85
00:02:52.402 
00:02:52.402 DPDK 24.03.0
00:02:52.402 
00:02:52.402 User defined options
00:02:52.402 buildtype : debug
00:02:52.402 default_library : shared
00:02:52.402 libdir : lib
00:02:52.402 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:52.402 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:52.402 c_link_args : 
00:02:52.402 cpu_instruction_set: native
00:02:52.402 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:52.402 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:52.402 enable_docs : false
00:02:52.402 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:52.402 enable_kmods : false
00:02:52.402 max_lcores : 128
00:02:52.402 tests : false
00:02:52.402 
00:02:52.402 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
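The long "explicitly disabled via build config" lists are the direct result of the disable_apps/disable_libs options in the summary above: SPDK configures DPDK with only the handful of libraries and drivers it needs. A trimmed sketch of such a configure step; the -D option names are real DPDK/meson options, but the lists here are abbreviated from the full ones in the summary:

#!/usr/bin/env bash
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
build=$src/build-tmp

# Abbreviated from the "User defined options" summary; the real invocation
# disables every app and most libraries.
meson setup "$build" "$src" \
    --buildtype debug --default-library shared \
    --prefix "$src/build" --libdir lib \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info \
    -Ddisable_libs=bbdev,gpudev,mldev,pipeline \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Dtests=false -Dmax_lcores=128 \
    -Dc_args='-fPIC -Werror'
ninja -C "$build"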
00:02:52.972 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:52.972 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:52.972 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:52.972 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:52.972 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:52.972 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:52.972 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:52.972 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:52.972 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:52.972 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:52.972 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:52.972 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:52.972 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:52.972 [13/268] Linking static target lib/librte_kvargs.a
00:02:52.972 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:52.972 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:52.972 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:52.972 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:52.972 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:52.972 [19/268] Linking static target lib/librte_log.a
00:02:53.234 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:53.234 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:53.234 [22/268] Linking static target lib/librte_pci.a
00:02:53.234 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:53.234 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:53.497 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:53.497 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:53.497 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:53.497 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:53.497 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:53.497 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:53.497 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:53.497 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:53.497 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:53.497 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:53.497 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:53.497 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:53.497 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:53.497 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:53.497 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:53.497 [40/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:53.497 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:53.497 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:53.497 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:53.497 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:53.497 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:53.497 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:53.497 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:53.497 [48/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:53.497 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:53.497 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:53.497 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:53.497 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:53.497 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:53.497 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:53.497 [55/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.497 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:53.497 [57/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:53.497 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:53.497 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:53.497 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:53.497 [61/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:53.497 [62/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:53.497 [63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:53.497 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:53.497 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:53.497 [66/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:53.497 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:53.497 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:53.497 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:53.756 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:53.756 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:53.756 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:53.756 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:53.756 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:53.756 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:53.756 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:53.756 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:53.756 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:53.756 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:53.756 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:53.756 [81/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:53.756 [82/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:53.756 [83/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:53.756 [84/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:53.756 [85/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:53.756 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:53.756 [87/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:53.756 [88/268] Linking static target lib/librte_meter.a
00:02:53.756 [89/268] Linking static target lib/librte_telemetry.a
00:02:53.756 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:53.756 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:53.756 [92/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:53.756 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:53.756 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:53.756 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:53.756 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:53.756 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:53.756 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:53.756 [99/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:53.756 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:53.756 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:53.756 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:53.757 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:53.757 [104/268] Linking static target lib/librte_ring.a
00:02:53.757 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:53.757 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.757 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:53.757 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:53.757 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:53.757 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:53.757 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:53.757 [112/268] Linking static target lib/librte_rcu.a
00:02:53.757 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:53.757 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:53.757 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:53.757 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:53.757 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:53.757 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:53.757 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:53.757 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:53.757 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:53.757 [122/268] Linking static target lib/librte_net.a
00:02:53.757 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:53.757 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:53.757 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:53.757 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:53.757 [127/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:53.757 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:53.757 [129/268] Linking static target lib/librte_mempool.a
00:02:53.757 [130/268] Linking static target lib/librte_eal.a
00:02:53.757 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:53.757 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:53.757 [133/268] Linking static target lib/librte_cmdline.a
00:02:54.016 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:54.016 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:54.016 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:54.016 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:54.016 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:54.016 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.016 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.016 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:54.016 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:54.016 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:54.016 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:54.016 [145/268] Linking target lib/librte_log.so.24.1 00:02:54.016 [146/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:54.016 [147/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:54.016 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.016 [149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.016 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.016 [151/268] Linking static target lib/librte_timer.a 00:02:54.016 [152/268] Linking static target lib/librte_mbuf.a 00:02:54.016 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.016 [154/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.016 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:54.016 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.016 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:54.016 [158/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.016 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.016 [160/268] Linking static target lib/librte_reorder.a 00:02:54.016 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.016 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.016 [163/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.016 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.016 [165/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.016 [166/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.016 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:54.016 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:54.016 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.276 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.276 [171/268] Linking static target lib/librte_security.a 00:02:54.276 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.276 [173/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.276 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.276 [175/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.276 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.276 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:54.276 [178/268] Linking static target lib/librte_dmadev.a 00:02:54.276 [179/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:54.276 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.276 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:54.276 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:54.276 [183/268] Linking static target lib/librte_power.a 00:02:54.276 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.276 [185/268] 
Linking static target lib/librte_compressdev.a 00:02:54.276 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.276 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.276 [188/268] Linking target lib/librte_kvargs.so.24.1 00:02:54.276 [189/268] Linking target lib/librte_telemetry.so.24.1 00:02:54.276 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.276 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.276 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.276 [193/268] Linking static target lib/librte_hash.a 00:02:54.276 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.276 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.276 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.276 [197/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.276 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.276 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.276 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.276 [201/268] Linking static target drivers/librte_bus_vdev.a 00:02:54.276 [202/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:54.276 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.534 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.534 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.534 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.534 [207/268] Linking static target drivers/librte_bus_pci.a 00:02:54.534 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.534 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.534 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:54.535 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.535 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.535 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.535 [214/268] Linking static target lib/librte_cryptodev.a 00:02:54.793 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.793 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.793 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.793 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.793 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.793 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.051 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:55.051 [222/268] Linking static target lib/librte_ethdev.a 00:02:55.051 [223/268] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.051 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.051 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.051 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.309 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.244 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.244 [229/268] Linking static target lib/librte_vhost.a 00:02:56.502 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.878 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.145 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.081 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.081 [234/268] Linking target lib/librte_eal.so.24.1 00:03:04.081 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:04.081 [236/268] Linking target lib/librte_ring.so.24.1 00:03:04.081 [237/268] Linking target lib/librte_meter.so.24.1 00:03:04.081 [238/268] Linking target lib/librte_timer.so.24.1 00:03:04.081 [239/268] Linking target lib/librte_pci.so.24.1 00:03:04.081 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:04.081 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:04.339 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:04.339 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:04.339 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:04.339 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:04.339 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:04.339 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:04.339 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:04.339 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:04.645 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:04.645 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:04.645 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.645 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:04.645 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.645 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:04.645 [256/268] Linking target lib/librte_net.so.24.1 00:03:04.645 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:04.645 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:04.998 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.998 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.998 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:04.998 [262/268] Linking target lib/librte_hash.so.24.1 00:03:04.998 [263/268] Linking target lib/librte_security.so.24.1 00:03:04.998 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:04.998 [265/268] Generating symbol 
file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.998 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:05.257 [267/268] Linking target lib/librte_power.so.24.1 00:03:05.257 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:05.257 INFO: autodetecting backend as ninja 00:03:05.257 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:15.239 CC lib/ut_mock/mock.o 00:03:15.239 CC lib/log/log.o 00:03:15.239 CC lib/log/log_flags.o 00:03:15.239 CC lib/ut/ut.o 00:03:15.239 CC lib/log/log_deprecated.o 00:03:15.498 LIB libspdk_ut_mock.a 00:03:15.498 LIB libspdk_ut.a 00:03:15.498 LIB libspdk_log.a 00:03:15.498 SO libspdk_ut_mock.so.6.0 00:03:15.498 SO libspdk_ut.so.2.0 00:03:15.498 SO libspdk_log.so.7.1 00:03:15.498 SYMLINK libspdk_ut_mock.so 00:03:15.498 SYMLINK libspdk_ut.so 00:03:15.498 SYMLINK libspdk_log.so 00:03:16.067 CXX lib/trace_parser/trace.o 00:03:16.067 CC lib/util/base64.o 00:03:16.067 CC lib/util/bit_array.o 00:03:16.067 CC lib/util/cpuset.o 00:03:16.067 CC lib/util/crc16.o 00:03:16.067 CC lib/util/crc32.o 00:03:16.067 CC lib/ioat/ioat.o 00:03:16.067 CC lib/util/crc32c.o 00:03:16.067 CC lib/dma/dma.o 00:03:16.067 CC lib/util/crc32_ieee.o 00:03:16.067 CC lib/util/crc64.o 00:03:16.067 CC lib/util/dif.o 00:03:16.067 CC lib/util/fd.o 00:03:16.067 CC lib/util/fd_group.o 00:03:16.067 CC lib/util/file.o 00:03:16.067 CC lib/util/hexlify.o 00:03:16.067 CC lib/util/iov.o 00:03:16.067 CC lib/util/math.o 00:03:16.067 CC lib/util/net.o 00:03:16.067 CC lib/util/pipe.o 00:03:16.067 CC lib/util/strerror_tls.o 00:03:16.067 CC lib/util/string.o 00:03:16.067 CC lib/util/uuid.o 00:03:16.067 CC lib/util/xor.o 00:03:16.067 CC lib/util/zipf.o 00:03:16.067 CC lib/util/md5.o 00:03:16.067 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.067 CC lib/vfio_user/host/vfio_user.o 00:03:16.067 LIB libspdk_dma.a 00:03:16.067 SO libspdk_dma.so.5.0 00:03:16.325 LIB libspdk_ioat.a 00:03:16.325 SYMLINK libspdk_dma.so 00:03:16.325 SO libspdk_ioat.so.7.0 00:03:16.325 SYMLINK libspdk_ioat.so 00:03:16.325 LIB libspdk_vfio_user.a 00:03:16.325 SO libspdk_vfio_user.so.5.0 00:03:16.325 LIB libspdk_util.a 00:03:16.325 SYMLINK libspdk_vfio_user.so 00:03:16.584 SO libspdk_util.so.10.1 00:03:16.584 SYMLINK libspdk_util.so 00:03:16.584 LIB libspdk_trace_parser.a 00:03:16.584 SO libspdk_trace_parser.so.6.0 00:03:16.841 SYMLINK libspdk_trace_parser.so 00:03:16.841 CC lib/json/json_parse.o 00:03:16.841 CC lib/conf/conf.o 00:03:16.841 CC lib/json/json_util.o 00:03:16.841 CC lib/json/json_write.o 00:03:16.841 CC lib/idxd/idxd.o 00:03:16.841 CC lib/idxd/idxd_user.o 00:03:16.841 CC lib/idxd/idxd_kernel.o 00:03:16.841 CC lib/rdma_utils/rdma_utils.o 00:03:16.841 CC lib/vmd/vmd.o 00:03:16.841 CC lib/vmd/led.o 00:03:16.841 CC lib/env_dpdk/env.o 00:03:16.841 CC lib/env_dpdk/memory.o 00:03:16.841 CC lib/env_dpdk/pci.o 00:03:16.841 CC lib/env_dpdk/init.o 00:03:16.841 CC lib/env_dpdk/threads.o 00:03:16.841 CC lib/env_dpdk/pci_ioat.o 00:03:16.841 CC lib/env_dpdk/pci_virtio.o 00:03:16.841 CC lib/env_dpdk/pci_vmd.o 00:03:16.841 CC lib/env_dpdk/pci_idxd.o 00:03:16.841 CC lib/env_dpdk/pci_event.o 00:03:16.841 CC lib/env_dpdk/sigbus_handler.o 00:03:16.841 CC lib/env_dpdk/pci_dpdk.o 00:03:16.842 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.842 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.099 LIB libspdk_conf.a 00:03:17.099 SO libspdk_conf.so.6.0 00:03:17.099 LIB libspdk_rdma_utils.a 00:03:17.099 LIB 
libspdk_json.a 00:03:17.357 SO libspdk_rdma_utils.so.1.0 00:03:17.357 SO libspdk_json.so.6.0 00:03:17.357 SYMLINK libspdk_conf.so 00:03:17.357 SYMLINK libspdk_rdma_utils.so 00:03:17.357 SYMLINK libspdk_json.so 00:03:17.357 LIB libspdk_idxd.a 00:03:17.357 SO libspdk_idxd.so.12.1 00:03:17.357 LIB libspdk_vmd.a 00:03:17.615 SYMLINK libspdk_idxd.so 00:03:17.615 SO libspdk_vmd.so.6.0 00:03:17.615 SYMLINK libspdk_vmd.so 00:03:17.615 CC lib/rdma_provider/common.o 00:03:17.615 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:17.615 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.615 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.615 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.615 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.875 LIB libspdk_rdma_provider.a 00:03:17.875 LIB libspdk_jsonrpc.a 00:03:17.875 SO libspdk_rdma_provider.so.7.0 00:03:17.875 SO libspdk_jsonrpc.so.6.0 00:03:17.875 SYMLINK libspdk_rdma_provider.so 00:03:17.875 SYMLINK libspdk_jsonrpc.so 00:03:17.875 LIB libspdk_env_dpdk.a 00:03:18.135 SO libspdk_env_dpdk.so.15.1 00:03:18.135 SYMLINK libspdk_env_dpdk.so 00:03:18.405 CC lib/rpc/rpc.o 00:03:18.405 LIB libspdk_rpc.a 00:03:18.405 SO libspdk_rpc.so.6.0 00:03:18.665 SYMLINK libspdk_rpc.so 00:03:18.925 CC lib/notify/notify.o 00:03:18.925 CC lib/notify/notify_rpc.o 00:03:18.925 CC lib/trace/trace.o 00:03:18.925 CC lib/trace/trace_flags.o 00:03:18.925 CC lib/keyring/keyring.o 00:03:18.925 CC lib/trace/trace_rpc.o 00:03:18.925 CC lib/keyring/keyring_rpc.o 00:03:19.185 LIB libspdk_notify.a 00:03:19.185 SO libspdk_notify.so.6.0 00:03:19.185 LIB libspdk_keyring.a 00:03:19.185 LIB libspdk_trace.a 00:03:19.185 SYMLINK libspdk_notify.so 00:03:19.185 SO libspdk_keyring.so.2.0 00:03:19.185 SO libspdk_trace.so.11.0 00:03:19.185 SYMLINK libspdk_keyring.so 00:03:19.185 SYMLINK libspdk_trace.so 00:03:19.446 CC lib/thread/thread.o 00:03:19.446 CC lib/sock/sock.o 00:03:19.446 CC lib/thread/iobuf.o 00:03:19.446 CC lib/sock/sock_rpc.o 00:03:20.017 LIB libspdk_sock.a 00:03:20.017 SO libspdk_sock.so.10.0 00:03:20.017 SYMLINK libspdk_sock.so 00:03:20.276 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.276 CC lib/nvme/nvme_ctrlr.o 00:03:20.276 CC lib/nvme/nvme_fabric.o 00:03:20.276 CC lib/nvme/nvme_ns_cmd.o 00:03:20.276 CC lib/nvme/nvme_ns.o 00:03:20.276 CC lib/nvme/nvme_pcie_common.o 00:03:20.276 CC lib/nvme/nvme_pcie.o 00:03:20.276 CC lib/nvme/nvme_qpair.o 00:03:20.276 CC lib/nvme/nvme.o 00:03:20.276 CC lib/nvme/nvme_quirks.o 00:03:20.276 CC lib/nvme/nvme_transport.o 00:03:20.276 CC lib/nvme/nvme_discovery.o 00:03:20.276 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.276 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.276 CC lib/nvme/nvme_tcp.o 00:03:20.276 CC lib/nvme/nvme_opal.o 00:03:20.276 CC lib/nvme/nvme_io_msg.o 00:03:20.276 CC lib/nvme/nvme_poll_group.o 00:03:20.276 CC lib/nvme/nvme_zns.o 00:03:20.276 CC lib/nvme/nvme_stubs.o 00:03:20.276 CC lib/nvme/nvme_auth.o 00:03:20.276 CC lib/nvme/nvme_cuse.o 00:03:20.276 CC lib/nvme/nvme_vfio_user.o 00:03:20.276 CC lib/nvme/nvme_rdma.o 00:03:20.843 LIB libspdk_thread.a 00:03:20.843 SO libspdk_thread.so.11.0 00:03:20.843 SYMLINK libspdk_thread.so 00:03:21.101 CC lib/accel/accel.o 00:03:21.101 CC lib/accel/accel_rpc.o 00:03:21.101 CC lib/accel/accel_sw.o 00:03:21.101 CC lib/blob/blobstore.o 00:03:21.101 CC lib/blob/request.o 00:03:21.101 CC lib/blob/zeroes.o 00:03:21.101 CC lib/blob/blob_bs_dev.o 00:03:21.101 CC lib/init/json_config.o 00:03:21.101 CC lib/init/subsystem.o 00:03:21.101 CC lib/init/subsystem_rpc.o 00:03:21.101 CC lib/init/rpc.o 00:03:21.101 CC lib/virtio/virtio.o 00:03:21.101 
CC lib/fsdev/fsdev.o 00:03:21.101 CC lib/virtio/virtio_vfio_user.o 00:03:21.101 CC lib/fsdev/fsdev_rpc.o 00:03:21.101 CC lib/virtio/virtio_vhost_user.o 00:03:21.101 CC lib/fsdev/fsdev_io.o 00:03:21.101 CC lib/virtio/virtio_pci.o 00:03:21.101 CC lib/vfu_tgt/tgt_endpoint.o 00:03:21.101 CC lib/vfu_tgt/tgt_rpc.o 00:03:21.362 LIB libspdk_init.a 00:03:21.362 SO libspdk_init.so.6.0 00:03:21.362 LIB libspdk_virtio.a 00:03:21.362 LIB libspdk_vfu_tgt.a 00:03:21.362 SYMLINK libspdk_init.so 00:03:21.362 SO libspdk_virtio.so.7.0 00:03:21.362 SO libspdk_vfu_tgt.so.3.0 00:03:21.620 SYMLINK libspdk_virtio.so 00:03:21.620 SYMLINK libspdk_vfu_tgt.so 00:03:21.620 LIB libspdk_fsdev.a 00:03:21.620 SO libspdk_fsdev.so.2.0 00:03:21.878 SYMLINK libspdk_fsdev.so 00:03:21.878 CC lib/event/app.o 00:03:21.878 CC lib/event/reactor.o 00:03:21.878 CC lib/event/log_rpc.o 00:03:21.878 CC lib/event/app_rpc.o 00:03:21.878 CC lib/event/scheduler_static.o 00:03:21.878 LIB libspdk_accel.a 00:03:21.878 SO libspdk_accel.so.16.0 00:03:22.137 SYMLINK libspdk_accel.so 00:03:22.137 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:22.137 LIB libspdk_nvme.a 00:03:22.137 LIB libspdk_event.a 00:03:22.137 SO libspdk_event.so.14.0 00:03:22.137 SO libspdk_nvme.so.15.0 00:03:22.137 SYMLINK libspdk_event.so 00:03:22.396 CC lib/bdev/bdev.o 00:03:22.396 CC lib/bdev/bdev_rpc.o 00:03:22.396 CC lib/bdev/bdev_zone.o 00:03:22.396 CC lib/bdev/part.o 00:03:22.396 CC lib/bdev/scsi_nvme.o 00:03:22.396 SYMLINK libspdk_nvme.so 00:03:22.654 LIB libspdk_fuse_dispatcher.a 00:03:22.655 SO libspdk_fuse_dispatcher.so.1.0 00:03:22.655 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.222 LIB libspdk_blob.a 00:03:23.222 SO libspdk_blob.so.12.0 00:03:23.480 SYMLINK libspdk_blob.so 00:03:23.739 CC lib/blobfs/blobfs.o 00:03:23.739 CC lib/lvol/lvol.o 00:03:23.739 CC lib/blobfs/tree.o 00:03:24.306 LIB libspdk_bdev.a 00:03:24.307 SO libspdk_bdev.so.17.0 00:03:24.307 LIB libspdk_blobfs.a 00:03:24.307 SYMLINK libspdk_bdev.so 00:03:24.307 SO libspdk_blobfs.so.11.0 00:03:24.307 LIB libspdk_lvol.a 00:03:24.566 SYMLINK libspdk_blobfs.so 00:03:24.566 SO libspdk_lvol.so.11.0 00:03:24.566 SYMLINK libspdk_lvol.so 00:03:24.827 CC lib/ublk/ublk.o 00:03:24.827 CC lib/ublk/ublk_rpc.o 00:03:24.827 CC lib/nbd/nbd.o 00:03:24.827 CC lib/nbd/nbd_rpc.o 00:03:24.827 CC lib/ftl/ftl_core.o 00:03:24.827 CC lib/ftl/ftl_init.o 00:03:24.827 CC lib/scsi/dev.o 00:03:24.827 CC lib/ftl/ftl_layout.o 00:03:24.827 CC lib/ftl/ftl_debug.o 00:03:24.827 CC lib/scsi/lun.o 00:03:24.827 CC lib/nvmf/ctrlr.o 00:03:24.827 CC lib/scsi/port.o 00:03:24.827 CC lib/ftl/ftl_io.o 00:03:24.827 CC lib/nvmf/ctrlr_discovery.o 00:03:24.827 CC lib/scsi/scsi.o 00:03:24.827 CC lib/ftl/ftl_sb.o 00:03:24.827 CC lib/nvmf/ctrlr_bdev.o 00:03:24.827 CC lib/ftl/ftl_l2p.o 00:03:24.827 CC lib/nvmf/subsystem.o 00:03:24.827 CC lib/scsi/scsi_bdev.o 00:03:24.827 CC lib/ftl/ftl_l2p_flat.o 00:03:24.827 CC lib/nvmf/nvmf.o 00:03:24.827 CC lib/scsi/scsi_pr.o 00:03:24.827 CC lib/ftl/ftl_nv_cache.o 00:03:24.827 CC lib/scsi/scsi_rpc.o 00:03:24.827 CC lib/ftl/ftl_band.o 00:03:24.827 CC lib/nvmf/nvmf_rpc.o 00:03:24.827 CC lib/nvmf/transport.o 00:03:24.827 CC lib/scsi/task.o 00:03:24.827 CC lib/ftl/ftl_band_ops.o 00:03:24.827 CC lib/ftl/ftl_writer.o 00:03:24.827 CC lib/nvmf/tcp.o 00:03:24.827 CC lib/ftl/ftl_rq.o 00:03:24.827 CC lib/nvmf/stubs.o 00:03:24.827 CC lib/nvmf/vfio_user.o 00:03:24.827 CC lib/nvmf/mdns_server.o 00:03:24.827 CC lib/ftl/ftl_reloc.o 00:03:24.827 CC lib/ftl/ftl_l2p_cache.o 00:03:24.827 CC lib/nvmf/auth.o 00:03:24.827 CC 
lib/ftl/ftl_p2l_log.o 00:03:24.827 CC lib/nvmf/rdma.o 00:03:24.827 CC lib/ftl/ftl_p2l.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.827 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.827 CC lib/ftl/utils/ftl_conf.o 00:03:24.827 CC lib/ftl/utils/ftl_md.o 00:03:24.827 CC lib/ftl/utils/ftl_mempool.o 00:03:24.827 CC lib/ftl/utils/ftl_property.o 00:03:24.827 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.827 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.827 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.827 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.827 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.827 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.827 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.827 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.827 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.827 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.827 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.827 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:24.827 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.827 CC lib/ftl/base/ftl_base_dev.o 00:03:24.827 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.827 CC lib/ftl/ftl_trace.o 00:03:24.827 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.394 LIB libspdk_nbd.a 00:03:25.395 SO libspdk_nbd.so.7.0 00:03:25.395 LIB libspdk_scsi.a 00:03:25.395 SO libspdk_scsi.so.9.0 00:03:25.395 SYMLINK libspdk_nbd.so 00:03:25.395 LIB libspdk_ublk.a 00:03:25.395 SYMLINK libspdk_scsi.so 00:03:25.654 SO libspdk_ublk.so.3.0 00:03:25.654 SYMLINK libspdk_ublk.so 00:03:25.912 CC lib/vhost/vhost.o 00:03:25.912 CC lib/vhost/vhost_rpc.o 00:03:25.912 CC lib/iscsi/conn.o 00:03:25.912 CC lib/vhost/vhost_scsi.o 00:03:25.912 CC lib/vhost/rte_vhost_user.o 00:03:25.912 CC lib/iscsi/init_grp.o 00:03:25.912 CC lib/vhost/vhost_blk.o 00:03:25.912 CC lib/iscsi/iscsi.o 00:03:25.912 CC lib/iscsi/param.o 00:03:25.912 CC lib/iscsi/portal_grp.o 00:03:25.912 CC lib/iscsi/tgt_node.o 00:03:25.912 CC lib/iscsi/iscsi_subsystem.o 00:03:25.912 CC lib/iscsi/iscsi_rpc.o 00:03:25.912 CC lib/iscsi/task.o 00:03:25.912 LIB libspdk_ftl.a 00:03:25.912 SO libspdk_ftl.so.9.0 00:03:26.171 SYMLINK libspdk_ftl.so 00:03:26.428 LIB libspdk_nvmf.a 00:03:26.687 SO libspdk_nvmf.so.20.0 00:03:26.687 LIB libspdk_vhost.a 00:03:26.687 SO libspdk_vhost.so.8.0 00:03:26.687 SYMLINK libspdk_nvmf.so 00:03:26.687 SYMLINK libspdk_vhost.so 00:03:26.687 LIB libspdk_iscsi.a 00:03:26.945 SO libspdk_iscsi.so.8.0 00:03:26.945 SYMLINK libspdk_iscsi.so 00:03:27.512 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.512 CC module/vfu_device/vfu_virtio.o 00:03:27.512 CC module/vfu_device/vfu_virtio_blk.o 00:03:27.512 CC module/vfu_device/vfu_virtio_scsi.o 00:03:27.512 CC module/vfu_device/vfu_virtio_rpc.o 00:03:27.512 CC module/vfu_device/vfu_virtio_fs.o 00:03:27.769 CC module/accel/error/accel_error.o 00:03:27.769 LIB libspdk_env_dpdk_rpc.a 00:03:27.769 CC module/accel/error/accel_error_rpc.o 00:03:27.769 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.769 CC module/accel/dsa/accel_dsa.o 00:03:27.769 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.769 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.769 CC module/sock/posix/posix.o 00:03:27.769 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.769 CC module/keyring/linux/keyring_rpc.o 00:03:27.769 CC module/keyring/linux/keyring.o 00:03:27.769 CC module/blob/bdev/blob_bdev.o 00:03:27.769 CC module/accel/iaa/accel_iaa.o 00:03:27.769 CC module/keyring/file/keyring.o 00:03:27.769 CC module/keyring/file/keyring_rpc.o 00:03:27.769 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.769 CC module/accel/ioat/accel_ioat.o 00:03:27.769 CC module/fsdev/aio/fsdev_aio.o 00:03:27.769 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.769 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:27.769 CC module/fsdev/aio/linux_aio_mgr.o 00:03:27.769 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.769 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.769 LIB libspdk_keyring_linux.a 00:03:27.769 LIB libspdk_scheduler_gscheduler.a 00:03:28.027 LIB libspdk_keyring_file.a 00:03:28.027 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.027 SO libspdk_keyring_linux.so.1.0 00:03:28.027 LIB libspdk_scheduler_dynamic.a 00:03:28.027 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.027 LIB libspdk_accel_error.a 00:03:28.027 LIB libspdk_accel_iaa.a 00:03:28.027 LIB libspdk_accel_ioat.a 00:03:28.027 SO libspdk_keyring_file.so.2.0 00:03:28.027 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.027 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.027 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.027 SO libspdk_accel_error.so.2.0 00:03:28.027 SO libspdk_accel_iaa.so.3.0 00:03:28.027 SYMLINK libspdk_keyring_linux.so 00:03:28.027 SO libspdk_accel_ioat.so.6.0 00:03:28.027 LIB libspdk_accel_dsa.a 00:03:28.027 LIB libspdk_blob_bdev.a 00:03:28.027 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.027 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.027 SYMLINK libspdk_keyring_file.so 00:03:28.027 SO libspdk_blob_bdev.so.12.0 00:03:28.027 SO libspdk_accel_dsa.so.5.0 00:03:28.027 SYMLINK libspdk_accel_error.so 00:03:28.027 SYMLINK libspdk_accel_iaa.so 00:03:28.027 SYMLINK libspdk_accel_ioat.so 00:03:28.027 SYMLINK libspdk_blob_bdev.so 00:03:28.027 SYMLINK libspdk_accel_dsa.so 00:03:28.027 LIB libspdk_vfu_device.a 00:03:28.027 SO libspdk_vfu_device.so.3.0 00:03:28.285 SYMLINK libspdk_vfu_device.so 00:03:28.285 LIB libspdk_fsdev_aio.a 00:03:28.285 SO libspdk_fsdev_aio.so.1.0 00:03:28.285 LIB libspdk_sock_posix.a 00:03:28.285 SO libspdk_sock_posix.so.6.0 00:03:28.285 SYMLINK libspdk_fsdev_aio.so 00:03:28.543 SYMLINK libspdk_sock_posix.so 00:03:28.543 CC module/bdev/null/bdev_null.o 00:03:28.543 CC module/bdev/null/bdev_null_rpc.o 00:03:28.543 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.543 CC module/bdev/delay/vbdev_delay.o 00:03:28.543 CC module/bdev/gpt/gpt.o 00:03:28.543 CC module/bdev/nvme/bdev_nvme.o 00:03:28.543 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.543 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.543 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.543 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.543 CC module/bdev/error/vbdev_error.o 00:03:28.543 CC module/bdev/nvme/nvme_rpc.o 00:03:28.543 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.543 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.543 CC module/bdev/nvme/vbdev_opal.o 00:03:28.543 CC module/bdev/ftl/bdev_ftl.o 00:03:28.543 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.543 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.543 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.543 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.543 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.543 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.543 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.543 CC module/bdev/malloc/bdev_malloc.o 00:03:28.543 CC module/bdev/raid/bdev_raid.o 00:03:28.543 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.543 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.543 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.543 CC module/bdev/split/vbdev_split.o 00:03:28.543 CC module/bdev/raid/raid0.o 00:03:28.543 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.543 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.544 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.544 CC module/bdev/raid/raid1.o 00:03:28.544 CC module/bdev/raid/concat.o 00:03:28.544 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.544 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.544 CC module/bdev/aio/bdev_aio.o 00:03:28.544 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.544 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.544 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.544 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.802 LIB libspdk_bdev_null.a 00:03:28.802 LIB libspdk_blobfs_bdev.a 00:03:28.802 LIB libspdk_bdev_error.a 00:03:28.802 LIB libspdk_bdev_split.a 00:03:28.802 SO libspdk_bdev_error.so.6.0 00:03:28.802 SO libspdk_bdev_null.so.6.0 00:03:28.802 SO libspdk_blobfs_bdev.so.6.0 00:03:28.802 SO libspdk_bdev_split.so.6.0 00:03:29.060 LIB libspdk_bdev_passthru.a 00:03:29.060 LIB libspdk_bdev_gpt.a 00:03:29.060 SYMLINK libspdk_bdev_null.so 00:03:29.060 SYMLINK libspdk_blobfs_bdev.so 00:03:29.060 SYMLINK libspdk_bdev_split.so 00:03:29.060 LIB libspdk_bdev_delay.a 00:03:29.060 SYMLINK libspdk_bdev_error.so 00:03:29.060 SO libspdk_bdev_gpt.so.6.0 00:03:29.060 SO libspdk_bdev_passthru.so.6.0 00:03:29.060 LIB libspdk_bdev_zone_block.a 00:03:29.060 LIB libspdk_bdev_aio.a 00:03:29.060 LIB libspdk_bdev_ftl.a 00:03:29.060 SO libspdk_bdev_delay.so.6.0 00:03:29.060 SO libspdk_bdev_zone_block.so.6.0 00:03:29.060 SO libspdk_bdev_aio.so.6.0 00:03:29.060 SO libspdk_bdev_ftl.so.6.0 00:03:29.060 SYMLINK libspdk_bdev_gpt.so 00:03:29.060 LIB libspdk_bdev_iscsi.a 00:03:29.060 SYMLINK libspdk_bdev_passthru.so 00:03:29.060 SYMLINK libspdk_bdev_delay.so 00:03:29.060 LIB libspdk_bdev_malloc.a 00:03:29.060 SYMLINK libspdk_bdev_zone_block.so 00:03:29.060 SO libspdk_bdev_iscsi.so.6.0 00:03:29.060 SYMLINK libspdk_bdev_aio.so 00:03:29.060 SYMLINK libspdk_bdev_ftl.so 00:03:29.060 SO libspdk_bdev_malloc.so.6.0 00:03:29.060 SYMLINK libspdk_bdev_iscsi.so 00:03:29.060 SYMLINK libspdk_bdev_malloc.so 00:03:29.060 LIB libspdk_bdev_lvol.a 00:03:29.060 LIB libspdk_bdev_virtio.a 00:03:29.319 SO libspdk_bdev_lvol.so.6.0 00:03:29.319 SO libspdk_bdev_virtio.so.6.0 00:03:29.319 SYMLINK libspdk_bdev_lvol.so 00:03:29.319 SYMLINK libspdk_bdev_virtio.so 00:03:29.578 LIB libspdk_bdev_raid.a 00:03:29.578 SO libspdk_bdev_raid.so.6.0 00:03:29.578 SYMLINK libspdk_bdev_raid.so 00:03:30.514 LIB libspdk_bdev_nvme.a 00:03:30.514 SO libspdk_bdev_nvme.so.7.1 00:03:30.514 SYMLINK libspdk_bdev_nvme.so 00:03:31.450 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.450 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.450 CC module/event/subsystems/keyring/keyring.o 00:03:31.450 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.450 CC module/event/subsystems/fsdev/fsdev.o 00:03:31.450 CC module/event/subsystems/sock/sock.o 00:03:31.450 CC module/event/subsystems/vmd/vmd.o 00:03:31.450 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:31.450 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.450 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:31.450 LIB libspdk_event_keyring.a 00:03:31.450 LIB libspdk_event_scheduler.a 00:03:31.450 LIB libspdk_event_sock.a 00:03:31.450 LIB libspdk_event_iobuf.a 00:03:31.450 LIB libspdk_event_vhost_blk.a 00:03:31.450 LIB libspdk_event_fsdev.a 00:03:31.450 LIB libspdk_event_vfu_tgt.a 00:03:31.450 LIB libspdk_event_vmd.a 00:03:31.450 SO libspdk_event_scheduler.so.4.0 00:03:31.450 SO libspdk_event_keyring.so.1.0 00:03:31.450 SO libspdk_event_sock.so.5.0 00:03:31.450 SO libspdk_event_fsdev.so.1.0 00:03:31.450 SO libspdk_event_vhost_blk.so.3.0 00:03:31.450 SO libspdk_event_iobuf.so.3.0 00:03:31.450 SO libspdk_event_vfu_tgt.so.3.0 00:03:31.450 SO libspdk_event_vmd.so.6.0 00:03:31.450 SYMLINK libspdk_event_scheduler.so 00:03:31.450 SYMLINK libspdk_event_keyring.so 00:03:31.450 SYMLINK libspdk_event_sock.so 00:03:31.450 SYMLINK libspdk_event_vhost_blk.so 00:03:31.450 SYMLINK libspdk_event_fsdev.so 00:03:31.709 SYMLINK libspdk_event_vfu_tgt.so 00:03:31.709 SYMLINK libspdk_event_iobuf.so 00:03:31.709 SYMLINK libspdk_event_vmd.so 00:03:31.969 CC module/event/subsystems/accel/accel.o 00:03:31.969 LIB libspdk_event_accel.a 00:03:32.229 SO libspdk_event_accel.so.6.0 00:03:32.229 SYMLINK libspdk_event_accel.so 00:03:32.488 CC module/event/subsystems/bdev/bdev.o 00:03:32.747 LIB libspdk_event_bdev.a 00:03:32.747 SO libspdk_event_bdev.so.6.0 00:03:32.747 SYMLINK libspdk_event_bdev.so 00:03:33.006 CC module/event/subsystems/nbd/nbd.o 00:03:33.006 CC module/event/subsystems/ublk/ublk.o 00:03:33.006 CC module/event/subsystems/scsi/scsi.o 00:03:33.006 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.006 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.264 LIB libspdk_event_ublk.a 00:03:33.264 LIB libspdk_event_nbd.a 00:03:33.264 LIB libspdk_event_scsi.a 00:03:33.264 SO libspdk_event_ublk.so.3.0 00:03:33.264 SO libspdk_event_nbd.so.6.0 00:03:33.264 SO libspdk_event_scsi.so.6.0 00:03:33.264 SYMLINK libspdk_event_ublk.so 00:03:33.264 LIB libspdk_event_nvmf.a 00:03:33.264 SYMLINK libspdk_event_nbd.so 00:03:33.264 SYMLINK libspdk_event_scsi.so 00:03:33.264 SO libspdk_event_nvmf.so.6.0 00:03:33.523 SYMLINK libspdk_event_nvmf.so 00:03:33.781 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.781 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.781 LIB libspdk_event_vhost_scsi.a 00:03:33.781 LIB libspdk_event_iscsi.a 00:03:33.781 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.781 SO libspdk_event_iscsi.so.6.0 00:03:34.040 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.040 SYMLINK libspdk_event_iscsi.so 00:03:34.040 SO libspdk.so.6.0 00:03:34.040 SYMLINK libspdk.so 00:03:34.627 CXX app/trace/trace.o 00:03:34.627 CC app/trace_record/trace_record.o 00:03:34.627 CC app/spdk_top/spdk_top.o 00:03:34.627 CC app/spdk_lspci/spdk_lspci.o 00:03:34.627 CC app/spdk_nvme_perf/perf.o 00:03:34.627 CC test/rpc_client/rpc_client_test.o 00:03:34.627 CC app/spdk_nvme_identify/identify.o 00:03:34.627 TEST_HEADER include/spdk/accel.h 00:03:34.627 TEST_HEADER include/spdk/accel_module.h 00:03:34.627 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.627 TEST_HEADER include/spdk/assert.h 00:03:34.627 TEST_HEADER include/spdk/bdev_module.h 00:03:34.627 TEST_HEADER include/spdk/base64.h 00:03:34.627 TEST_HEADER include/spdk/barrier.h 00:03:34.627 TEST_HEADER include/spdk/bdev.h 00:03:34.627 TEST_HEADER include/spdk/bdev_zone.h 00:03:34.627 TEST_HEADER include/spdk/bit_array.h 00:03:34.627 TEST_HEADER include/spdk/bit_pool.h 00:03:34.627 TEST_HEADER include/spdk/blob_bdev.h 00:03:34.627 
TEST_HEADER include/spdk/blobfs_bdev.h 00:03:34.627 TEST_HEADER include/spdk/blobfs.h 00:03:34.627 TEST_HEADER include/spdk/blob.h 00:03:34.627 TEST_HEADER include/spdk/conf.h 00:03:34.627 TEST_HEADER include/spdk/cpuset.h 00:03:34.627 TEST_HEADER include/spdk/config.h 00:03:34.627 TEST_HEADER include/spdk/crc16.h 00:03:34.627 TEST_HEADER include/spdk/crc64.h 00:03:34.627 TEST_HEADER include/spdk/crc32.h 00:03:34.627 CC app/spdk_dd/spdk_dd.o 00:03:34.627 TEST_HEADER include/spdk/endian.h 00:03:34.627 TEST_HEADER include/spdk/dif.h 00:03:34.627 TEST_HEADER include/spdk/dma.h 00:03:34.627 TEST_HEADER include/spdk/env_dpdk.h 00:03:34.627 TEST_HEADER include/spdk/env.h 00:03:34.627 TEST_HEADER include/spdk/event.h 00:03:34.627 TEST_HEADER include/spdk/fd.h 00:03:34.627 TEST_HEADER include/spdk/fd_group.h 00:03:34.627 TEST_HEADER include/spdk/file.h 00:03:34.628 TEST_HEADER include/spdk/fsdev.h 00:03:34.628 TEST_HEADER include/spdk/fsdev_module.h 00:03:34.628 TEST_HEADER include/spdk/ftl.h 00:03:34.628 TEST_HEADER include/spdk/gpt_spec.h 00:03:34.628 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:34.628 TEST_HEADER include/spdk/hexlify.h 00:03:34.628 TEST_HEADER include/spdk/histogram_data.h 00:03:34.628 TEST_HEADER include/spdk/idxd_spec.h 00:03:34.628 TEST_HEADER include/spdk/idxd.h 00:03:34.628 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.628 TEST_HEADER include/spdk/ioat.h 00:03:34.628 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.628 TEST_HEADER include/spdk/init.h 00:03:34.628 TEST_HEADER include/spdk/iscsi_spec.h 00:03:34.628 TEST_HEADER include/spdk/ioat_spec.h 00:03:34.628 TEST_HEADER include/spdk/json.h 00:03:34.628 TEST_HEADER include/spdk/jsonrpc.h 00:03:34.628 TEST_HEADER include/spdk/keyring_module.h 00:03:34.628 TEST_HEADER include/spdk/keyring.h 00:03:34.628 TEST_HEADER include/spdk/lvol.h 00:03:34.628 TEST_HEADER include/spdk/likely.h 00:03:34.628 TEST_HEADER include/spdk/log.h 00:03:34.628 TEST_HEADER include/spdk/md5.h 00:03:34.628 TEST_HEADER include/spdk/memory.h 00:03:34.628 TEST_HEADER include/spdk/nbd.h 00:03:34.628 TEST_HEADER include/spdk/mmio.h 00:03:34.628 TEST_HEADER include/spdk/net.h 00:03:34.628 TEST_HEADER include/spdk/notify.h 00:03:34.628 TEST_HEADER include/spdk/nvme.h 00:03:34.628 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:34.628 TEST_HEADER include/spdk/nvme_intel.h 00:03:34.628 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:34.628 CC app/nvmf_tgt/nvmf_main.o 00:03:34.628 TEST_HEADER include/spdk/nvme_spec.h 00:03:34.628 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:34.628 TEST_HEADER include/spdk/nvme_zns.h 00:03:34.628 TEST_HEADER include/spdk/nvmf.h 00:03:34.628 TEST_HEADER include/spdk/nvmf_spec.h 00:03:34.628 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:34.628 CC app/spdk_tgt/spdk_tgt.o 00:03:34.628 TEST_HEADER include/spdk/nvmf_transport.h 00:03:34.628 TEST_HEADER include/spdk/opal_spec.h 00:03:34.628 TEST_HEADER include/spdk/opal.h 00:03:34.628 TEST_HEADER include/spdk/pipe.h 00:03:34.628 TEST_HEADER include/spdk/pci_ids.h 00:03:34.628 TEST_HEADER include/spdk/reduce.h 00:03:34.628 TEST_HEADER include/spdk/queue.h 00:03:34.628 TEST_HEADER include/spdk/scheduler.h 00:03:34.628 TEST_HEADER include/spdk/rpc.h 00:03:34.628 TEST_HEADER include/spdk/scsi.h 00:03:34.628 TEST_HEADER include/spdk/sock.h 00:03:34.628 TEST_HEADER include/spdk/scsi_spec.h 00:03:34.628 TEST_HEADER include/spdk/stdinc.h 00:03:34.628 TEST_HEADER include/spdk/string.h 00:03:34.628 TEST_HEADER include/spdk/thread.h 00:03:34.628 TEST_HEADER include/spdk/trace.h 00:03:34.628 
TEST_HEADER include/spdk/trace_parser.h 00:03:34.628 TEST_HEADER include/spdk/tree.h 00:03:34.628 TEST_HEADER include/spdk/ublk.h 00:03:34.628 TEST_HEADER include/spdk/util.h 00:03:34.628 TEST_HEADER include/spdk/version.h 00:03:34.628 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:34.628 TEST_HEADER include/spdk/uuid.h 00:03:34.628 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:34.628 TEST_HEADER include/spdk/vmd.h 00:03:34.628 TEST_HEADER include/spdk/vhost.h 00:03:34.628 CXX test/cpp_headers/accel.o 00:03:34.628 TEST_HEADER include/spdk/xor.h 00:03:34.628 CXX test/cpp_headers/assert.o 00:03:34.628 CXX test/cpp_headers/accel_module.o 00:03:34.628 TEST_HEADER include/spdk/zipf.h 00:03:34.628 CXX test/cpp_headers/barrier.o 00:03:34.628 CXX test/cpp_headers/base64.o 00:03:34.628 CXX test/cpp_headers/bdev.o 00:03:34.628 CXX test/cpp_headers/bdev_zone.o 00:03:34.628 CXX test/cpp_headers/bdev_module.o 00:03:34.628 CXX test/cpp_headers/bit_pool.o 00:03:34.628 CXX test/cpp_headers/bit_array.o 00:03:34.628 CXX test/cpp_headers/blob_bdev.o 00:03:34.628 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.628 CXX test/cpp_headers/blobfs.o 00:03:34.628 CXX test/cpp_headers/blob.o 00:03:34.628 CXX test/cpp_headers/conf.o 00:03:34.628 CXX test/cpp_headers/config.o 00:03:34.628 CXX test/cpp_headers/cpuset.o 00:03:34.628 CXX test/cpp_headers/crc16.o 00:03:34.628 CXX test/cpp_headers/crc32.o 00:03:34.628 CXX test/cpp_headers/crc64.o 00:03:34.628 CXX test/cpp_headers/endian.o 00:03:34.628 CXX test/cpp_headers/dma.o 00:03:34.628 CXX test/cpp_headers/dif.o 00:03:34.628 CXX test/cpp_headers/env_dpdk.o 00:03:34.628 CXX test/cpp_headers/env.o 00:03:34.628 CXX test/cpp_headers/event.o 00:03:34.628 CXX test/cpp_headers/fsdev.o 00:03:34.628 CXX test/cpp_headers/fd.o 00:03:34.628 CXX test/cpp_headers/fd_group.o 00:03:34.628 CXX test/cpp_headers/file.o 00:03:34.628 CXX test/cpp_headers/fsdev_module.o 00:03:34.628 CXX test/cpp_headers/fuse_dispatcher.o 00:03:34.628 CXX test/cpp_headers/ftl.o 00:03:34.628 CXX test/cpp_headers/idxd.o 00:03:34.628 CXX test/cpp_headers/hexlify.o 00:03:34.628 CXX test/cpp_headers/histogram_data.o 00:03:34.628 CXX test/cpp_headers/gpt_spec.o 00:03:34.628 CXX test/cpp_headers/idxd_spec.o 00:03:34.628 CXX test/cpp_headers/init.o 00:03:34.628 CXX test/cpp_headers/ioat.o 00:03:34.628 CXX test/cpp_headers/ioat_spec.o 00:03:34.628 CXX test/cpp_headers/iscsi_spec.o 00:03:34.628 CXX test/cpp_headers/json.o 00:03:34.628 CXX test/cpp_headers/keyring.o 00:03:34.628 CXX test/cpp_headers/jsonrpc.o 00:03:34.628 CXX test/cpp_headers/likely.o 00:03:34.628 CXX test/cpp_headers/keyring_module.o 00:03:34.628 CXX test/cpp_headers/log.o 00:03:34.628 CXX test/cpp_headers/lvol.o 00:03:34.628 CXX test/cpp_headers/md5.o 00:03:34.628 CXX test/cpp_headers/memory.o 00:03:34.628 CXX test/cpp_headers/mmio.o 00:03:34.628 CXX test/cpp_headers/nbd.o 00:03:34.628 CXX test/cpp_headers/net.o 00:03:34.628 CXX test/cpp_headers/notify.o 00:03:34.628 CXX test/cpp_headers/nvme.o 00:03:34.628 CXX test/cpp_headers/nvme_intel.o 00:03:34.628 CXX test/cpp_headers/nvme_ocssd.o 00:03:34.628 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:34.628 CXX test/cpp_headers/nvme_spec.o 00:03:34.628 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.628 CXX test/cpp_headers/nvme_zns.o 00:03:34.628 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.628 CXX test/cpp_headers/nvmf.o 00:03:34.628 CXX test/cpp_headers/nvmf_spec.o 00:03:34.628 CXX test/cpp_headers/nvmf_transport.o 00:03:34.628 CC examples/util/zipf/zipf.o 00:03:34.628 CXX test/cpp_headers/opal.o 00:03:34.628 CC 
test/thread/poller_perf/poller_perf.o 00:03:34.628 CC examples/ioat/perf/perf.o 00:03:34.903 CC test/app/histogram_perf/histogram_perf.o 00:03:34.903 CC test/env/pci/pci_ut.o 00:03:34.903 CXX test/cpp_headers/opal_spec.o 00:03:34.903 CC test/env/vtophys/vtophys.o 00:03:34.903 CC test/app/jsoncat/jsoncat.o 00:03:34.903 CC test/env/memory/memory_ut.o 00:03:34.903 CC test/app/stub/stub.o 00:03:34.903 CC examples/ioat/verify/verify.o 00:03:34.903 CC app/fio/nvme/fio_plugin.o 00:03:34.903 CC test/app/bdev_svc/bdev_svc.o 00:03:34.903 CC test/dma/test_dma/test_dma.o 00:03:34.903 LINK spdk_lspci 00:03:34.903 CC app/fio/bdev/fio_plugin.o 00:03:34.903 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:35.174 LINK spdk_nvme_discover 00:03:35.174 LINK iscsi_tgt 00:03:35.174 LINK rpc_client_test 00:03:35.174 LINK interrupt_tgt 00:03:35.174 LINK nvmf_tgt 00:03:35.174 LINK zipf 00:03:35.174 LINK histogram_perf 00:03:35.174 LINK poller_perf 00:03:35.174 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.174 LINK vtophys 00:03:35.437 CXX test/cpp_headers/pci_ids.o 00:03:35.437 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.437 CXX test/cpp_headers/pipe.o 00:03:35.437 CXX test/cpp_headers/queue.o 00:03:35.437 LINK spdk_tgt 00:03:35.437 CXX test/cpp_headers/reduce.o 00:03:35.437 CXX test/cpp_headers/rpc.o 00:03:35.437 CXX test/cpp_headers/scheduler.o 00:03:35.437 CXX test/cpp_headers/scsi.o 00:03:35.437 CXX test/cpp_headers/scsi_spec.o 00:03:35.437 CXX test/cpp_headers/sock.o 00:03:35.437 CXX test/cpp_headers/stdinc.o 00:03:35.437 CXX test/cpp_headers/string.o 00:03:35.437 LINK spdk_dd 00:03:35.437 LINK jsoncat 00:03:35.437 CXX test/cpp_headers/thread.o 00:03:35.437 CXX test/cpp_headers/trace.o 00:03:35.437 CXX test/cpp_headers/trace_parser.o 00:03:35.437 CXX test/cpp_headers/tree.o 00:03:35.437 CXX test/cpp_headers/ublk.o 00:03:35.437 CXX test/cpp_headers/util.o 00:03:35.437 LINK spdk_trace_record 00:03:35.437 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.437 CXX test/cpp_headers/version.o 00:03:35.437 CXX test/cpp_headers/uuid.o 00:03:35.437 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.437 CXX test/cpp_headers/vhost.o 00:03:35.437 CXX test/cpp_headers/vmd.o 00:03:35.437 CXX test/cpp_headers/xor.o 00:03:35.437 CXX test/cpp_headers/zipf.o 00:03:35.437 LINK env_dpdk_post_init 00:03:35.437 LINK bdev_svc 00:03:35.437 LINK stub 00:03:35.437 LINK ioat_perf 00:03:35.437 LINK verify 00:03:35.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.437 LINK spdk_trace 00:03:35.437 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.696 LINK pci_ut 00:03:35.954 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.954 CC examples/idxd/perf/perf.o 00:03:35.954 CC examples/vmd/led/led.o 00:03:35.954 LINK test_dma 00:03:35.954 CC test/event/event_perf/event_perf.o 00:03:35.954 CC test/event/reactor_perf/reactor_perf.o 00:03:35.954 CC test/event/reactor/reactor.o 00:03:35.954 LINK spdk_bdev 00:03:35.954 CC examples/sock/hello_world/hello_sock.o 00:03:35.954 LINK spdk_top 00:03:35.954 CC test/event/app_repeat/app_repeat.o 00:03:35.954 CC examples/thread/thread/thread_ex.o 00:03:35.954 CC test/event/scheduler/scheduler.o 00:03:35.954 LINK spdk_nvme_perf 00:03:35.954 LINK spdk_nvme 00:03:35.954 LINK nvme_fuzz 00:03:35.954 LINK spdk_nvme_identify 00:03:35.954 LINK reactor_perf 00:03:35.954 LINK vhost_fuzz 00:03:35.954 LINK lsvmd 00:03:35.954 LINK event_perf 00:03:35.954 CC app/vhost/vhost.o 00:03:35.954 LINK led 00:03:35.954 LINK reactor 00:03:35.954 LINK mem_callbacks 
00:03:36.213 LINK app_repeat 00:03:36.213 LINK hello_sock 00:03:36.213 LINK scheduler 00:03:36.213 LINK idxd_perf 00:03:36.213 LINK thread 00:03:36.213 LINK vhost 00:03:36.213 LINK memory_ut 00:03:36.471 CC test/nvme/err_injection/err_injection.o 00:03:36.471 CC test/nvme/e2edp/nvme_dp.o 00:03:36.471 CC test/nvme/cuse/cuse.o 00:03:36.471 CC test/nvme/aer/aer.o 00:03:36.471 CC test/nvme/reset/reset.o 00:03:36.471 CC test/nvme/simple_copy/simple_copy.o 00:03:36.471 CC test/nvme/connect_stress/connect_stress.o 00:03:36.471 CC test/nvme/boot_partition/boot_partition.o 00:03:36.471 CC test/nvme/reserve/reserve.o 00:03:36.471 CC test/nvme/startup/startup.o 00:03:36.471 CC test/nvme/overhead/overhead.o 00:03:36.471 CC test/nvme/compliance/nvme_compliance.o 00:03:36.471 CC test/nvme/fdp/fdp.o 00:03:36.471 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.471 CC test/nvme/sgl/sgl.o 00:03:36.471 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.471 CC test/accel/dif/dif.o 00:03:36.471 CC test/blobfs/mkfs/mkfs.o 00:03:36.471 CC test/lvol/esnap/esnap.o 00:03:36.729 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:36.729 CC examples/nvme/abort/abort.o 00:03:36.729 LINK err_injection 00:03:36.729 CC examples/nvme/hello_world/hello_world.o 00:03:36.729 LINK boot_partition 00:03:36.729 CC examples/nvme/arbitration/arbitration.o 00:03:36.729 CC examples/nvme/reconnect/reconnect.o 00:03:36.729 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.729 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.729 CC examples/nvme/hotplug/hotplug.o 00:03:36.729 LINK reserve 00:03:36.729 LINK doorbell_aers 00:03:36.730 LINK startup 00:03:36.730 LINK connect_stress 00:03:36.730 LINK fused_ordering 00:03:36.730 LINK simple_copy 00:03:36.730 LINK reset 00:03:36.730 LINK mkfs 00:03:36.730 LINK nvme_dp 00:03:36.730 CC examples/accel/perf/accel_perf.o 00:03:36.730 CC examples/blob/hello_world/hello_blob.o 00:03:36.730 CC examples/blob/cli/blobcli.o 00:03:36.730 LINK sgl 00:03:36.730 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:36.730 LINK overhead 00:03:36.730 LINK aer 00:03:36.730 LINK nvme_compliance 00:03:36.730 LINK fdp 00:03:36.730 LINK pmr_persistence 00:03:36.730 LINK cmb_copy 00:03:36.988 LINK hello_world 00:03:36.988 LINK hotplug 00:03:36.988 LINK abort 00:03:36.988 LINK arbitration 00:03:36.988 LINK reconnect 00:03:36.988 LINK hello_blob 00:03:36.988 LINK hello_fsdev 00:03:36.988 LINK dif 00:03:36.988 LINK iscsi_fuzz 00:03:36.988 LINK nvme_manage 00:03:37.246 LINK accel_perf 00:03:37.246 LINK blobcli 00:03:37.505 LINK cuse 00:03:37.505 CC test/bdev/bdevio/bdevio.o 00:03:37.764 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.764 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.024 LINK bdevio 00:03:38.024 LINK hello_bdev 00:03:38.283 LINK bdevperf 00:03:38.851 CC examples/nvmf/nvmf/nvmf.o 00:03:39.110 LINK nvmf 00:03:40.490 LINK esnap 00:03:40.490 00:03:40.490 real 0m56.765s 00:03:40.490 user 8m22.698s 00:03:40.490 sys 3m57.297s 00:03:40.490 17:14:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:40.490 17:14:09 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.490 ************************************ 00:03:40.490 END TEST make 00:03:40.490 ************************************ 00:03:40.490 17:14:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.490 17:14:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.490 17:14:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.490 17:14:09 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:40.490 17:14:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.490 17:14:09 -- pm/common@44 -- $ pid=2302935 00:03:40.490 17:14:09 -- pm/common@50 -- $ kill -TERM 2302935 00:03:40.490 17:14:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.490 17:14:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.490 17:14:09 -- pm/common@44 -- $ pid=2302937 00:03:40.490 17:14:09 -- pm/common@50 -- $ kill -TERM 2302937 00:03:40.490 17:14:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.490 17:14:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:40.490 17:14:09 -- pm/common@44 -- $ pid=2302939 00:03:40.490 17:14:09 -- pm/common@50 -- $ kill -TERM 2302939 00:03:40.490 17:14:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.490 17:14:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:40.490 17:14:09 -- pm/common@44 -- $ pid=2302963 00:03:40.490 17:14:09 -- pm/common@50 -- $ sudo -E kill -TERM 2302963 00:03:40.490 17:14:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:40.490 17:14:09 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:40.750 17:14:09 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:40.750 17:14:09 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:40.750 17:14:09 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:40.750 17:14:09 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:40.750 17:14:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.750 17:14:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.750 17:14:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.750 17:14:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.750 17:14:09 -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.750 17:14:09 -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.750 17:14:09 -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.750 17:14:09 -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.750 17:14:09 -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.750 17:14:09 -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.750 17:14:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.750 17:14:09 -- scripts/common.sh@344 -- # case "$op" in 00:03:40.750 17:14:09 -- scripts/common.sh@345 -- # : 1 00:03:40.750 17:14:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.750 17:14:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.750 17:14:09 -- scripts/common.sh@365 -- # decimal 1 00:03:40.751 17:14:09 -- scripts/common.sh@353 -- # local d=1 00:03:40.751 17:14:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.751 17:14:09 -- scripts/common.sh@355 -- # echo 1 00:03:40.751 17:14:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.751 17:14:09 -- scripts/common.sh@366 -- # decimal 2 00:03:40.751 17:14:09 -- scripts/common.sh@353 -- # local d=2 00:03:40.751 17:14:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.751 17:14:09 -- scripts/common.sh@355 -- # echo 2 00:03:40.751 17:14:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.751 17:14:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.751 17:14:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.751 17:14:09 -- scripts/common.sh@368 -- # return 0 00:03:40.751 17:14:09 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.751 17:14:09 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:40.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.751 --rc genhtml_branch_coverage=1 00:03:40.751 --rc genhtml_function_coverage=1 00:03:40.751 --rc genhtml_legend=1 00:03:40.751 --rc geninfo_all_blocks=1 00:03:40.751 --rc geninfo_unexecuted_blocks=1 00:03:40.751 00:03:40.751 ' 00:03:40.751 17:14:09 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:40.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.751 --rc genhtml_branch_coverage=1 00:03:40.751 --rc genhtml_function_coverage=1 00:03:40.751 --rc genhtml_legend=1 00:03:40.751 --rc geninfo_all_blocks=1 00:03:40.751 --rc geninfo_unexecuted_blocks=1 00:03:40.751 00:03:40.751 ' 00:03:40.751 17:14:09 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:40.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.751 --rc genhtml_branch_coverage=1 00:03:40.751 --rc genhtml_function_coverage=1 00:03:40.751 --rc genhtml_legend=1 00:03:40.751 --rc geninfo_all_blocks=1 00:03:40.751 --rc geninfo_unexecuted_blocks=1 00:03:40.751 00:03:40.751 ' 00:03:40.751 17:14:09 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:40.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.751 --rc genhtml_branch_coverage=1 00:03:40.751 --rc genhtml_function_coverage=1 00:03:40.751 --rc genhtml_legend=1 00:03:40.751 --rc geninfo_all_blocks=1 00:03:40.751 --rc geninfo_unexecuted_blocks=1 00:03:40.751 00:03:40.751 ' 00:03:40.751 17:14:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:40.751 17:14:09 -- nvmf/common.sh@7 -- # uname -s 00:03:40.751 17:14:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.751 17:14:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.751 17:14:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.751 17:14:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.751 17:14:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.751 17:14:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.751 17:14:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.751 17:14:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.751 17:14:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.751 17:14:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.751 17:14:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:03:40.751 17:14:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:03:40.751 17:14:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.751 17:14:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.751 17:14:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:40.751 17:14:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.751 17:14:09 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:40.751 17:14:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:40.751 17:14:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.751 17:14:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.751 17:14:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.751 17:14:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.751 17:14:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.751 17:14:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.751 17:14:09 -- paths/export.sh@5 -- # export PATH 00:03:40.751 17:14:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.751 17:14:09 -- nvmf/common.sh@51 -- # : 0 00:03:40.751 17:14:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:40.751 17:14:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:40.751 17:14:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.751 17:14:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.751 17:14:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.751 17:14:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:40.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:40.751 17:14:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:40.751 17:14:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:40.751 17:14:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:40.751 17:14:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.751 17:14:09 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.751 17:14:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.751 17:14:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.751 17:14:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
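Note on the "[: : integer expression expected" message above: the xtrace immediately before it shows nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. a numeric test against a variable that is unset in this job's environment, so test(1) rejects the empty string and the branch is simply skipped. The run is unaffected, but a guarded form avoids the noise. A minimal sketch (FLAG is a hypothetical stand-in for whatever variable common.sh consults there):

    # Original shape -- errors out when the flag is unset:
    #   [ "$FLAG" -eq 1 ] && ...
    # Guarded shape -- defaults the flag to 0 so the numeric test is always valid:
    [ "${FLAG:-0}" -eq 1 ] && echo "flag set"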
00:03:40.751 17:14:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.751 17:14:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:40.751 17:14:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.751 17:14:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.751 17:14:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.751 17:14:09 -- spdk/autotest.sh@48 -- # udevadm_pid=2366652 00:03:40.751 17:14:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.751 17:14:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.751 17:14:09 -- pm/common@17 -- # local monitor 00:03:40.751 17:14:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.751 17:14:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.751 17:14:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.751 17:14:09 -- pm/common@21 -- # date +%s 00:03:40.751 17:14:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.751 17:14:09 -- pm/common@21 -- # date +%s 00:03:40.751 17:14:09 -- pm/common@25 -- # sleep 1 00:03:40.751 17:14:09 -- pm/common@21 -- # date +%s 00:03:40.751 17:14:09 -- pm/common@21 -- # date +%s 00:03:40.751 17:14:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760849 00:03:40.751 17:14:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760849 00:03:40.751 17:14:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760849 00:03:40.751 17:14:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733760849 00:03:40.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760849_collect-vmstat.pm.log 00:03:40.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760849_collect-cpu-load.pm.log 00:03:40.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760849_collect-cpu-temp.pm.log 00:03:40.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733760849_collect-bmc-pm.bmc.pm.log 00:03:41.690 17:14:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.690 17:14:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.690 17:14:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.690 17:14:10 -- common/autotest_common.sh@10 -- # set +x 00:03:41.690 17:14:10 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.690 17:14:10 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:41.690 17:14:10 -- common/autotest_common.sh@10 -- # set +x 00:03:41.690 17:14:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:41.949 17:14:10 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.949 17:14:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.949 17:14:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:41.949 17:14:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.949 17:14:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.949 17:14:10 -- common/autotest_common.sh@1457 -- # uname 00:03:41.949 17:14:10 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:41.949 17:14:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.949 17:14:10 -- common/autotest_common.sh@1477 -- # uname 00:03:41.949 17:14:10 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:41.949 17:14:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:41.949 17:14:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:41.949 lcov: LCOV version 1.15 00:03:41.949 17:14:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:54.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:09.041 17:14:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:09.041 17:14:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.042 17:14:35 -- common/autotest_common.sh@10 -- # set +x 00:04:09.042 17:14:35 -- spdk/autotest.sh@78 -- # rm -f 00:04:09.042 17:14:35 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.130 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:04:10.130 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:10.130 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:10.130 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:10.390 0000:80:04.0 (8086 
2021): Already using the ioatdma driver 00:04:10.390 17:14:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:10.390 17:14:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:10.390 17:14:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:10.390 17:14:39 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:10.390 17:14:39 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:10.390 17:14:39 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:10.390 17:14:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:10.390 17:14:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:10.390 17:14:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:10.390 17:14:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.390 17:14:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.390 17:14:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1669 -- # bdf=0000:5f:00.0 00:04:10.390 17:14:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:10.390 17:14:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:10.390 17:14:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.390 17:14:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.390 17:14:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:10.390 17:14:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:10.390 17:14:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:10.390 17:14:39 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:04:10.390 17:14:39 -- common/autotest_common.sh@1672 -- # zoned_ctrls["$nvme"]=0000:5f:00.0 00:04:10.390 17:14:39 -- common/autotest_common.sh@1673 -- # continue 2 00:04:10.390 17:14:39 -- common/autotest_common.sh@1678 -- # for nvme in "${!zoned_ctrls[@]}" 00:04:10.390 17:14:39 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:04:10.390 17:14:39 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:04:10.390 17:14:39 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:04:10.390 17:14:39 -- spdk/autotest.sh@85 -- # (( 2 > 0 )) 00:04:10.390 17:14:39 -- spdk/autotest.sh@90 -- # export 'PCI_BLOCKED=0000:5f:00.0 0000:5f:00.0' 00:04:10.390 17:14:39 -- spdk/autotest.sh@90 -- # PCI_BLOCKED='0000:5f:00.0 0000:5f:00.0' 00:04:10.390 17:14:39 -- spdk/autotest.sh@91 -- # export 'PCI_ZONED=0000:5f:00.0 0000:5f:00.0' 00:04:10.390 17:14:39 -- spdk/autotest.sh@91 -- # PCI_ZONED='0000:5f:00.0 0000:5f:00.0' 00:04:10.390 17:14:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.390 17:14:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.390 17:14:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:10.390 17:14:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:10.390 17:14:39 -- 
scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:10.390 No valid GPT data, bailing 00:04:10.390 17:14:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.390 17:14:39 -- scripts/common.sh@394 -- # pt= 00:04:10.390 17:14:39 -- scripts/common.sh@395 -- # return 1 00:04:10.390 17:14:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:10.390 1+0 records in 00:04:10.390 1+0 records out 00:04:10.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537088 s, 195 MB/s 00:04:10.390 17:14:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.390 17:14:39 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:04:10.390 17:14:39 -- spdk/autotest.sh@99 -- # continue 00:04:10.390 17:14:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.390 17:14:39 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:04:10.390 17:14:39 -- spdk/autotest.sh@99 -- # continue 00:04:10.390 17:14:39 -- spdk/autotest.sh@105 -- # sync 00:04:10.390 17:14:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:10.390 17:14:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:10.390 17:14:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:15.668 17:14:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:15.668 17:14:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:15.668 17:14:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:15.668 17:14:44 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:18.958 Hugepages 00:04:18.958 node hugesize free / total 00:04:18.958 node0 1048576kB 0 / 0 00:04:18.958 node0 2048kB 0 / 0 00:04:18.958 node1 1048576kB 0 / 0 00:04:18.958 node1 2048kB 0 / 0 00:04:18.958 00:04:18.958 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.958 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:18.958 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:18.958 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:18.958 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:04:18.958 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:18.958 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:18.958 17:14:48 -- spdk/autotest.sh@117 -- # uname -s 00:04:18.958 17:14:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:18.958 17:14:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:18.958 17:14:48 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.495 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:22.064 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:04:22.064 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.064 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.001 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.001 17:14:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:24.381 17:14:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:24.381 17:14:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:24.381 17:14:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:24.381 17:14:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:24.381 17:14:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.381 17:14:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.381 17:14:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.381 17:14:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.381 17:14:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.381 17:14:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:24.381 17:14:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:24.381 17:14:53 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.924 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:27.183 Waiting for block devices as requested 00:04:27.183 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:27.183 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:27.442 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:27.442 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:27.442 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:27.701 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:27.701 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:27.701 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:27.701 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:27.960 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:27.960 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:27.960 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:28.220 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:28.220 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:28.220 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:28.220 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:28.479 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:28.479 17:14:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.479 17:14:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.479 17:14:57 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:28.479 17:14:57 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:28.479 17:14:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:28.479 17:14:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.479 17:14:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.479 17:14:57 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:28.479 17:14:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.479 17:14:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.479 17:14:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:28.479 17:14:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.479 17:14:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.479 17:14:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.479 17:14:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.479 17:14:57 -- common/autotest_common.sh@1543 -- # continue 00:04:28.479 17:14:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:28.479 17:14:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.479 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:04:28.479 17:14:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:28.479 17:14:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.479 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:04:28.479 17:14:57 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.767 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:31.767 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.767 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.703 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.703 17:15:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:32.703 17:15:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.703 17:15:01 -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 17:15:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:32.703 17:15:01 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:32.703 17:15:01 -- 
common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.703 17:15:01 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:32.703 17:15:01 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:32.703 17:15:01 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:32.703 17:15:01 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:32.703 17:15:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:32.703 17:15:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.703 17:15:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.703 17:15:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.703 17:15:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.703 17:15:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.703 17:15:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:32.703 17:15:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:32.703 17:15:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.703 17:15:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:32.703 17:15:01 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:32.703 17:15:01 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:32.703 17:15:01 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:32.703 17:15:01 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:32.703 17:15:01 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:32.703 17:15:01 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:32.703 17:15:01 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.703 17:15:01 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2381325 00:04:32.703 17:15:01 -- common/autotest_common.sh@1585 -- # waitforlisten 2381325 00:04:32.703 17:15:01 -- common/autotest_common.sh@835 -- # '[' -z 2381325 ']' 00:04:32.703 17:15:01 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.703 17:15:01 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.703 17:15:01 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.703 17:15:01 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.703 17:15:01 -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 [2024-12-09 17:15:01.858455] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
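The get_nvme_bdfs_by_id trace above boils down to: enumerate every NVMe traddr that gen_nvme.sh reports, then keep the ones whose PCI device id matches 0x0a54. A standalone sketch of the same loop, assuming it is run from the spdk checkout used in this job:

    #!/usr/bin/env bash
    # Collect NVMe BDFs whose PCI device id is 0x0a54, mirroring the trace above.
    bdfs=()
    while read -r bdf; do
        if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
            bdfs+=("$bdf")
        fi
    done < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    printf '%s\n' "${bdfs[@]}"   # prints 0000:5e:00.0 on this host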
00:04:32.703 [2024-12-09 17:15:01.858501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381325 ] 00:04:32.963 [2024-12-09 17:15:01.930810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.963 [2024-12-09 17:15:01.970397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.222 17:15:02 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.222 17:15:02 -- common/autotest_common.sh@868 -- # return 0 00:04:33.222 17:15:02 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:33.222 17:15:02 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:33.222 17:15:02 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:36.511 nvme0n1 00:04:36.511 17:15:05 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:36.511 [2024-12-09 17:15:05.372462] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 1 00:04:36.511 [2024-12-09 17:15:05.372492] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 1 00:04:36.511 request: 00:04:36.511 { 00:04:36.511 "nvme_ctrlr_name": "nvme0", 00:04:36.511 "password": "test", 00:04:36.511 "method": "bdev_nvme_opal_revert", 00:04:36.511 "req_id": 1 00:04:36.511 } 00:04:36.511 Got JSON-RPC error response 00:04:36.511 response: 00:04:36.511 { 00:04:36.511 "code": -32603, 00:04:36.511 "message": "Internal error" 00:04:36.511 } 00:04:36.511 17:15:05 -- common/autotest_common.sh@1591 -- # true 00:04:36.511 17:15:05 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:36.511 17:15:05 -- common/autotest_common.sh@1595 -- # killprocess 2381325 00:04:36.511 17:15:05 -- common/autotest_common.sh@954 -- # '[' -z 2381325 ']' 00:04:36.511 17:15:05 -- common/autotest_common.sh@958 -- # kill -0 2381325 00:04:36.511 17:15:05 -- common/autotest_common.sh@959 -- # uname 00:04:36.511 17:15:05 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.511 17:15:05 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381325 00:04:36.511 17:15:05 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.511 17:15:05 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.511 17:15:05 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381325' 00:04:36.511 killing process with pid 2381325 00:04:36.511 17:15:05 -- common/autotest_common.sh@973 -- # kill 2381325 00:04:36.511 17:15:05 -- common/autotest_common.sh@978 -- # wait 2381325 00:04:37.889 17:15:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:37.889 17:15:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:37.889 17:15:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:37.889 17:15:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:37.889 17:15:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:37.889 17:15:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.889 17:15:07 -- common/autotest_common.sh@10 -- # set +x 00:04:37.889 17:15:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:37.889 17:15:07 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
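For context on the JSON-RPC exchange above: bdev_nvme_opal_revert asks the attached controller to revert its Opal TPer using the password "test"; on this drive spdk_opal_cmd_revert_tper cannot open the admin SP session, so the RPC surfaces the generic JSON-RPC code -32603 ("Internal error") and autotest deliberately swallows the failure (the trailing "true" in the trace). Reproducing the two calls by hand against a running spdk_tgt would look like this sketch (BDF as in this job):

    # Attach the controller at 0000:5e:00.0 as bdev "nvme0", then attempt the revert.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test \
        || echo "revert failed -- expected on drives that reject the admin SP session"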
00:04:37.889 17:15:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.889 17:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.889 17:15:07 -- common/autotest_common.sh@10 -- # set +x 00:04:38.149 ************************************ 00:04:38.149 START TEST env 00:04:38.149 ************************************ 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:38.149 * Looking for test storage... 00:04:38.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.149 17:15:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.149 17:15:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.149 17:15:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.149 17:15:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.149 17:15:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.149 17:15:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.149 17:15:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.149 17:15:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.149 17:15:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.149 17:15:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.149 17:15:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.149 17:15:07 env -- scripts/common.sh@344 -- # case "$op" in 00:04:38.149 17:15:07 env -- scripts/common.sh@345 -- # : 1 00:04:38.149 17:15:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.149 17:15:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.149 17:15:07 env -- scripts/common.sh@365 -- # decimal 1 00:04:38.149 17:15:07 env -- scripts/common.sh@353 -- # local d=1 00:04:38.149 17:15:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.149 17:15:07 env -- scripts/common.sh@355 -- # echo 1 00:04:38.149 17:15:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.149 17:15:07 env -- scripts/common.sh@366 -- # decimal 2 00:04:38.149 17:15:07 env -- scripts/common.sh@353 -- # local d=2 00:04:38.149 17:15:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.149 17:15:07 env -- scripts/common.sh@355 -- # echo 2 00:04:38.149 17:15:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.149 17:15:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.149 17:15:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.149 17:15:07 env -- scripts/common.sh@368 -- # return 0 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.149 --rc genhtml_branch_coverage=1 00:04:38.149 --rc genhtml_function_coverage=1 00:04:38.149 --rc genhtml_legend=1 00:04:38.149 --rc geninfo_all_blocks=1 00:04:38.149 --rc geninfo_unexecuted_blocks=1 00:04:38.149 00:04:38.149 ' 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.149 --rc genhtml_branch_coverage=1 00:04:38.149 --rc genhtml_function_coverage=1 00:04:38.149 --rc genhtml_legend=1 00:04:38.149 --rc geninfo_all_blocks=1 00:04:38.149 --rc geninfo_unexecuted_blocks=1 00:04:38.149 00:04:38.149 ' 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.149 --rc genhtml_branch_coverage=1 00:04:38.149 --rc genhtml_function_coverage=1 00:04:38.149 --rc genhtml_legend=1 00:04:38.149 --rc geninfo_all_blocks=1 00:04:38.149 --rc geninfo_unexecuted_blocks=1 00:04:38.149 00:04:38.149 ' 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.149 --rc genhtml_branch_coverage=1 00:04:38.149 --rc genhtml_function_coverage=1 00:04:38.149 --rc genhtml_legend=1 00:04:38.149 --rc geninfo_all_blocks=1 00:04:38.149 --rc geninfo_unexecuted_blocks=1 00:04:38.149 00:04:38.149 ' 00:04:38.149 17:15:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.149 17:15:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.149 17:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.149 ************************************ 00:04:38.149 START TEST env_memory 00:04:38.149 ************************************ 00:04:38.149 17:15:07 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.149 00:04:38.149 00:04:38.149 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.149 http://cunit.sourceforge.net/ 00:04:38.149 00:04:38.149 00:04:38.149 Suite: memory 00:04:38.409 Test: alloc and free memory map ...[2024-12-09 17:15:07.349091] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:38.409 passed 00:04:38.409 Test: mem map translation ...[2024-12-09 17:15:07.367657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:38.409 [2024-12-09 17:15:07.367672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:38.409 [2024-12-09 17:15:07.367708] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:38.409 [2024-12-09 17:15:07.367715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:38.409 passed 00:04:38.409 Test: mem map registration ...[2024-12-09 17:15:07.403857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:38.409 [2024-12-09 17:15:07.403873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:38.409 passed 00:04:38.409 Test: mem map adjacent registrations ...passed 00:04:38.409 00:04:38.409 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.409 suites 1 1 n/a 0 0 00:04:38.409 tests 4 4 4 0 0 00:04:38.409 asserts 152 152 152 0 n/a 00:04:38.409 00:04:38.409 Elapsed time = 0.136 seconds 00:04:38.409 00:04:38.409 real 0m0.149s 00:04:38.409 user 0m0.138s 00:04:38.409 sys 0m0.011s 00:04:38.409 17:15:07 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.409 17:15:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:38.409 ************************************ 00:04:38.409 END TEST env_memory 00:04:38.409 ************************************ 00:04:38.409 17:15:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:38.409 17:15:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.409 17:15:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.409 17:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.409 ************************************ 00:04:38.409 START TEST env_vtophys 00:04:38.409 ************************************ 00:04:38.409 17:15:07 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:38.409 EAL: lib.eal log level changed from notice to debug 00:04:38.409 EAL: Detected lcore 0 as core 0 on socket 0 00:04:38.409 EAL: Detected lcore 1 as core 1 on socket 0 00:04:38.409 EAL: Detected lcore 2 as core 2 on socket 0 00:04:38.410 EAL: Detected lcore 3 as core 3 on socket 0 00:04:38.410 EAL: Detected lcore 4 as core 4 on socket 0 00:04:38.410 EAL: Detected lcore 5 as core 5 on socket 0 00:04:38.410 EAL: Detected lcore 6 as core 6 on socket 0 00:04:38.410 EAL: Detected lcore 7 as core 8 on socket 0 00:04:38.410 EAL: Detected lcore 8 as core 9 on socket 0 00:04:38.410 EAL: Detected lcore 9 as core 10 on socket 0 00:04:38.410 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:38.410 EAL: Detected lcore 11 as core 12 on socket 0 00:04:38.410 EAL: Detected lcore 12 as core 13 on socket 0 00:04:38.410 EAL: Detected lcore 13 as core 16 on socket 0 00:04:38.410 EAL: Detected lcore 14 as core 17 on socket 0 00:04:38.410 EAL: Detected lcore 15 as core 18 on socket 0 00:04:38.410 EAL: Detected lcore 16 as core 19 on socket 0 00:04:38.410 EAL: Detected lcore 17 as core 20 on socket 0 00:04:38.410 EAL: Detected lcore 18 as core 21 on socket 0 00:04:38.410 EAL: Detected lcore 19 as core 25 on socket 0 00:04:38.410 EAL: Detected lcore 20 as core 26 on socket 0 00:04:38.410 EAL: Detected lcore 21 as core 27 on socket 0 00:04:38.410 EAL: Detected lcore 22 as core 28 on socket 0 00:04:38.410 EAL: Detected lcore 23 as core 29 on socket 0 00:04:38.410 EAL: Detected lcore 24 as core 0 on socket 1 00:04:38.410 EAL: Detected lcore 25 as core 1 on socket 1 00:04:38.410 EAL: Detected lcore 26 as core 2 on socket 1 00:04:38.410 EAL: Detected lcore 27 as core 3 on socket 1 00:04:38.410 EAL: Detected lcore 28 as core 4 on socket 1 00:04:38.410 EAL: Detected lcore 29 as core 5 on socket 1 00:04:38.410 EAL: Detected lcore 30 as core 6 on socket 1 00:04:38.410 EAL: Detected lcore 31 as core 8 on socket 1 00:04:38.410 EAL: Detected lcore 32 as core 9 on socket 1 00:04:38.410 EAL: Detected lcore 33 as core 10 on socket 1 00:04:38.410 EAL: Detected lcore 34 as core 11 on socket 1 00:04:38.410 EAL: Detected lcore 35 as core 12 on socket 1 00:04:38.410 EAL: Detected lcore 36 as core 13 on socket 1 00:04:38.410 EAL: Detected lcore 37 as core 16 on socket 1 00:04:38.410 EAL: Detected lcore 38 as core 17 on socket 1 00:04:38.410 EAL: Detected lcore 39 as core 18 on socket 1 00:04:38.410 EAL: Detected lcore 40 as core 19 on socket 1 00:04:38.410 EAL: Detected lcore 41 as core 20 on socket 1 00:04:38.410 EAL: Detected lcore 42 as core 21 on socket 1 00:04:38.410 EAL: Detected lcore 43 as core 25 on socket 1 00:04:38.410 EAL: Detected lcore 44 as core 26 on socket 1 00:04:38.410 EAL: Detected lcore 45 as core 27 on socket 1 00:04:38.410 EAL: Detected lcore 46 as core 28 on socket 1 00:04:38.410 EAL: Detected lcore 47 as core 29 on socket 1 00:04:38.410 EAL: Detected lcore 48 as core 0 on socket 0 00:04:38.410 EAL: Detected lcore 49 as core 1 on socket 0 00:04:38.410 EAL: Detected lcore 50 as core 2 on socket 0 00:04:38.410 EAL: Detected lcore 51 as core 3 on socket 0 00:04:38.410 EAL: Detected lcore 52 as core 4 on socket 0 00:04:38.410 EAL: Detected lcore 53 as core 5 on socket 0 00:04:38.410 EAL: Detected lcore 54 as core 6 on socket 0 00:04:38.410 EAL: Detected lcore 55 as core 8 on socket 0 00:04:38.410 EAL: Detected lcore 56 as core 9 on socket 0 00:04:38.410 EAL: Detected lcore 57 as core 10 on socket 0 00:04:38.410 EAL: Detected lcore 58 as core 11 on socket 0 00:04:38.410 EAL: Detected lcore 59 as core 12 on socket 0 00:04:38.410 EAL: Detected lcore 60 as core 13 on socket 0 00:04:38.410 EAL: Detected lcore 61 as core 16 on socket 0 00:04:38.410 EAL: Detected lcore 62 as core 17 on socket 0 00:04:38.410 EAL: Detected lcore 63 as core 18 on socket 0 00:04:38.410 EAL: Detected lcore 64 as core 19 on socket 0 00:04:38.410 EAL: Detected lcore 65 as core 20 on socket 0 00:04:38.410 EAL: Detected lcore 66 as core 21 on socket 0 00:04:38.410 EAL: Detected lcore 67 as core 25 on socket 0 00:04:38.410 EAL: Detected lcore 68 as core 26 on socket 0 00:04:38.410 EAL: Detected lcore 69 as core 27 on socket 0 00:04:38.410 EAL: Detected lcore 70 as core 28 on socket 0 00:04:38.410 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:38.410 EAL: Detected lcore 72 as core 0 on socket 1 00:04:38.410 EAL: Detected lcore 73 as core 1 on socket 1 00:04:38.410 EAL: Detected lcore 74 as core 2 on socket 1 00:04:38.410 EAL: Detected lcore 75 as core 3 on socket 1 00:04:38.410 EAL: Detected lcore 76 as core 4 on socket 1 00:04:38.410 EAL: Detected lcore 77 as core 5 on socket 1 00:04:38.410 EAL: Detected lcore 78 as core 6 on socket 1 00:04:38.410 EAL: Detected lcore 79 as core 8 on socket 1 00:04:38.410 EAL: Detected lcore 80 as core 9 on socket 1 00:04:38.410 EAL: Detected lcore 81 as core 10 on socket 1 00:04:38.410 EAL: Detected lcore 82 as core 11 on socket 1 00:04:38.410 EAL: Detected lcore 83 as core 12 on socket 1 00:04:38.410 EAL: Detected lcore 84 as core 13 on socket 1 00:04:38.410 EAL: Detected lcore 85 as core 16 on socket 1 00:04:38.410 EAL: Detected lcore 86 as core 17 on socket 1 00:04:38.410 EAL: Detected lcore 87 as core 18 on socket 1 00:04:38.410 EAL: Detected lcore 88 as core 19 on socket 1 00:04:38.410 EAL: Detected lcore 89 as core 20 on socket 1 00:04:38.410 EAL: Detected lcore 90 as core 21 on socket 1 00:04:38.410 EAL: Detected lcore 91 as core 25 on socket 1 00:04:38.410 EAL: Detected lcore 92 as core 26 on socket 1 00:04:38.410 EAL: Detected lcore 93 as core 27 on socket 1 00:04:38.410 EAL: Detected lcore 94 as core 28 on socket 1 00:04:38.410 EAL: Detected lcore 95 as core 29 on socket 1 00:04:38.410 EAL: Maximum logical cores by configuration: 128 00:04:38.410 EAL: Detected CPU lcores: 96 00:04:38.410 EAL: Detected NUMA nodes: 2 00:04:38.410 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:38.410 EAL: Detected shared linkage of DPDK 00:04:38.410 EAL: No shared files mode enabled, IPC will be disabled 00:04:38.410 EAL: Bus pci wants IOVA as 'DC' 00:04:38.410 EAL: Buses did not request a specific IOVA mode. 00:04:38.410 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:38.410 EAL: Selected IOVA mode 'VA' 00:04:38.410 EAL: Probing VFIO support... 00:04:38.410 EAL: IOMMU type 1 (Type 1) is supported 00:04:38.410 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:38.410 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:38.410 EAL: VFIO support initialized 00:04:38.410 EAL: Ask a virtual area of 0x2e000 bytes 00:04:38.410 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:38.410 EAL: Setting up physically contiguous memory... 
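A quick read of the lcore inventory above: 96 lcores on 2 sockets resolve to 24 physical cores per socket (core ids 0-6, 8-13, 16-21, 25-29), with lcores 48-95 repeating the same (core, socket) pairs as lcores 0-47, i.e. hyperthread siblings. On a host like this one, the physical-core count can be cross-checked with a one-liner (illustrative; requires util-linux's lscpu):

    # Count unique (core,socket) pairs -- expect 48 on this topology.
    lscpu -p=CPU,CORE,SOCKET | grep -v '^#' | awk -F, '{print $2 "," $3}' | sort -u | wc -l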
00:04:38.410 EAL: Setting maximum number of open files to 524288 00:04:38.410 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:38.410 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:38.410 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:38.410 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:38.410 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.410 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:38.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.410 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.410 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:38.410 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:38.410 EAL: Hugepages will be freed exactly as allocated. 00:04:38.410 EAL: No shared files mode enabled, IPC is disabled 00:04:38.410 EAL: No shared files mode enabled, IPC is disabled 00:04:38.410 EAL: TSC frequency is ~2100000 KHz 00:04:38.410 EAL: Main lcore 0 is ready (tid=7f09565c4a00;cpuset=[0]) 00:04:38.410 EAL: Trying to obtain current memory policy. 00:04:38.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.410 EAL: Restoring previous memory policy: 0 00:04:38.410 EAL: request: mp_malloc_sync 00:04:38.410 EAL: No shared files mode enabled, IPC is disabled 00:04:38.410 EAL: Heap on socket 0 was expanded by 2MB 00:04:38.410 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:38.670 EAL: Mem event callback 'spdk:(nil)' registered 00:04:38.670 00:04:38.670 00:04:38.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.670 http://cunit.sourceforge.net/ 00:04:38.670 00:04:38.670 00:04:38.670 Suite: components_suite 00:04:38.670 Test: vtophys_malloc_test ...passed 00:04:38.670 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 4MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 4MB 00:04:38.670 EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 6MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 6MB 00:04:38.670 EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 10MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 10MB 00:04:38.670 EAL: Trying to obtain current memory policy. 
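Decoding the memseg reservations just above: each "VA reserved for memseg list ..., size 400000000" line sets aside 16 GiB of virtual address space (8192 segments x 2 MiB hugepages), and EAL creates four such lists per NUMA node; none of it is backed by real hugepages until the heap grows in the test rounds that follow. The size constant checks out with shell arithmetic:

    # 8192 segments x 2 MiB = 0x400000000 bytes = 16 GiB per memseg list.
    printf '0x%x bytes = %d GiB\n' $(( 8192 * 2 * 1024 * 1024 )) $(( 8192 * 2 * 1024 * 1024 / 1024**3 ))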
00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 18MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 18MB 00:04:38.670 EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 34MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 34MB 00:04:38.670 EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 66MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 66MB 00:04:38.670 EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 130MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was shrunk by 130MB 00:04:38.670 EAL: Trying to obtain current memory policy. 00:04:38.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.670 EAL: Restoring previous memory policy: 4 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.670 EAL: request: mp_malloc_sync 00:04:38.670 EAL: No shared files mode enabled, IPC is disabled 00:04:38.670 EAL: Heap on socket 0 was expanded by 258MB 00:04:38.670 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.936 EAL: request: mp_malloc_sync 00:04:38.936 EAL: No shared files mode enabled, IPC is disabled 00:04:38.936 EAL: Heap on socket 0 was shrunk by 258MB 00:04:38.936 EAL: Trying to obtain current memory policy. 
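The heap growth in these vtophys rounds is not arbitrary: after the initial 2 MB expansion during EAL setup, the logged per-round growth follows (2^n + 2) MB, which is exactly the 4, 6, 10, 18, 34, 66, 130, 258 MB progression above and the 514 and 1026 MB rounds that follow. The sequence is easy to regenerate:

    # Reproduce the per-round expansion sizes observed in the log.
    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB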
00:04:38.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.936 EAL: Restoring previous memory policy: 4 00:04:38.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.936 EAL: request: mp_malloc_sync 00:04:38.936 EAL: No shared files mode enabled, IPC is disabled 00:04:38.936 EAL: Heap on socket 0 was expanded by 514MB 00:04:38.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.936 EAL: request: mp_malloc_sync 00:04:38.936 EAL: No shared files mode enabled, IPC is disabled 00:04:38.936 EAL: Heap on socket 0 was shrunk by 514MB 00:04:38.936 EAL: Trying to obtain current memory policy. 00:04:38.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.195 EAL: Restoring previous memory policy: 4 00:04:39.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.195 EAL: request: mp_malloc_sync 00:04:39.195 EAL: No shared files mode enabled, IPC is disabled 00:04:39.195 EAL: Heap on socket 0 was expanded by 1026MB 00:04:39.455 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.455 EAL: request: mp_malloc_sync 00:04:39.455 EAL: No shared files mode enabled, IPC is disabled 00:04:39.455 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:39.455 passed 00:04:39.455 00:04:39.455 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.455 suites 1 1 n/a 0 0 00:04:39.455 tests 2 2 2 0 0 00:04:39.455 asserts 497 497 497 0 n/a 00:04:39.455 00:04:39.455 Elapsed time = 0.965 seconds 00:04:39.455 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.455 EAL: request: mp_malloc_sync 00:04:39.455 EAL: No shared files mode enabled, IPC is disabled 00:04:39.455 EAL: Heap on socket 0 was shrunk by 2MB 00:04:39.455 EAL: No shared files mode enabled, IPC is disabled 00:04:39.455 EAL: No shared files mode enabled, IPC is disabled 00:04:39.455 EAL: No shared files mode enabled, IPC is disabled 00:04:39.455 00:04:39.455 real 0m1.091s 00:04:39.455 user 0m0.647s 00:04:39.455 sys 0m0.421s 00:04:39.455 17:15:08 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.455 17:15:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:39.455 ************************************ 00:04:39.455 END TEST env_vtophys 00:04:39.455 ************************************ 00:04:39.714 17:15:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:39.714 17:15:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.714 17:15:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.714 17:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.714 ************************************ 00:04:39.714 START TEST env_pci 00:04:39.714 ************************************ 00:04:39.714 17:15:08 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:39.714 00:04:39.714 00:04:39.714 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.714 http://cunit.sourceforge.net/ 00:04:39.714 00:04:39.714 00:04:39.714 Suite: pci 00:04:39.714 Test: pci_hook ...[2024-12-09 17:15:08.700691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2382968 has claimed it 00:04:39.714 EAL: Cannot find device (10000:00:01.0) 00:04:39.714 EAL: Failed to attach device on primary process 00:04:39.714 passed 00:04:39.714 00:04:39.714 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:39.714 suites 1 1 n/a 0 0 00:04:39.714 tests 1 1 1 0 0 00:04:39.714 asserts 25 25 25 0 n/a 00:04:39.714 00:04:39.714 Elapsed time = 0.027 seconds 00:04:39.714 00:04:39.714 real 0m0.048s 00:04:39.714 user 0m0.011s 00:04:39.714 sys 0m0.037s 00:04:39.714 17:15:08 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.714 17:15:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:39.714 ************************************ 00:04:39.714 END TEST env_pci 00:04:39.714 ************************************ 00:04:39.714 17:15:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:39.714 17:15:08 env -- env/env.sh@15 -- # uname 00:04:39.714 17:15:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:39.714 17:15:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:39.714 17:15:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.714 17:15:08 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:39.714 17:15:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.714 17:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.714 ************************************ 00:04:39.714 START TEST env_dpdk_post_init 00:04:39.714 ************************************ 00:04:39.714 17:15:08 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.714 EAL: Detected CPU lcores: 96 00:04:39.714 EAL: Detected NUMA nodes: 2 00:04:39.714 EAL: Detected shared linkage of DPDK 00:04:39.714 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.714 EAL: Selected IOVA mode 'VA' 00:04:39.714 EAL: VFIO support initialized 00:04:39.714 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.973 EAL: Using IOMMU type 1 (Type 1) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:39.973 EAL: Ignore mapping IO port bar(1) 00:04:39.973 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:40.911 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:40.911 EAL: Ignore mapping IO port bar(1) 00:04:40.911 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:44.199 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:44.199 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:44.199 Starting DPDK initialization... 00:04:44.199 Starting SPDK post initialization... 00:04:44.199 SPDK NVMe probe 00:04:44.199 Attaching to 0000:5e:00.0 00:04:44.199 Attached to 0000:5e:00.0 00:04:44.199 Cleaning up... 00:04:44.199 00:04:44.199 real 0m4.402s 00:04:44.199 user 0m3.006s 00:04:44.199 sys 0m0.462s 00:04:44.199 17:15:13 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.199 17:15:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.199 ************************************ 00:04:44.199 END TEST env_dpdk_post_init 00:04:44.199 ************************************ 00:04:44.199 17:15:13 env -- env/env.sh@26 -- # uname 00:04:44.199 17:15:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.199 17:15:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.199 17:15:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.199 17:15:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.199 17:15:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.199 ************************************ 00:04:44.199 START TEST env_mem_callbacks 00:04:44.199 ************************************ 00:04:44.199 17:15:13 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.199 EAL: Detected CPU lcores: 96 00:04:44.199 EAL: Detected NUMA nodes: 2 00:04:44.199 EAL: Detected shared linkage of DPDK 00:04:44.199 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.199 EAL: Selected IOVA mode 'VA' 00:04:44.199 EAL: VFIO support initialized 00:04:44.199 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.199 00:04:44.199 00:04:44.199 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.199 http://cunit.sourceforge.net/ 00:04:44.199 00:04:44.199 00:04:44.199 Suite: memory 00:04:44.199 Test: test ... 
00:04:44.199 register 0x200000200000 2097152 00:04:44.199 malloc 3145728 00:04:44.199 register 0x200000400000 4194304 00:04:44.199 buf 0x200000500000 len 3145728 PASSED 00:04:44.199 malloc 64 00:04:44.199 buf 0x2000004fff40 len 64 PASSED 00:04:44.199 malloc 4194304 00:04:44.199 register 0x200000800000 6291456 00:04:44.199 buf 0x200000a00000 len 4194304 PASSED 00:04:44.199 free 0x200000500000 3145728 00:04:44.199 free 0x2000004fff40 64 00:04:44.199 unregister 0x200000400000 4194304 PASSED 00:04:44.199 free 0x200000a00000 4194304 00:04:44.199 unregister 0x200000800000 6291456 PASSED 00:04:44.199 malloc 8388608 00:04:44.199 register 0x200000400000 10485760 00:04:44.199 buf 0x200000600000 len 8388608 PASSED 00:04:44.199 free 0x200000600000 8388608 00:04:44.199 unregister 0x200000400000 10485760 PASSED 00:04:44.199 passed 00:04:44.199 00:04:44.199 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.199 suites 1 1 n/a 0 0 00:04:44.199 tests 1 1 1 0 0 00:04:44.199 asserts 15 15 15 0 n/a 00:04:44.199 00:04:44.199 Elapsed time = 0.008 seconds 00:04:44.199 00:04:44.199 real 0m0.057s 00:04:44.199 user 0m0.020s 00:04:44.199 sys 0m0.037s 00:04:44.199 17:15:13 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.199 17:15:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.199 ************************************ 00:04:44.199 END TEST env_mem_callbacks 00:04:44.199 ************************************ 00:04:44.199 00:04:44.199 real 0m6.285s 00:04:44.199 user 0m4.080s 00:04:44.199 sys 0m1.283s 00:04:44.199 17:15:13 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.199 17:15:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.199 ************************************ 00:04:44.199 END TEST env 00:04:44.199 ************************************ 00:04:44.458 17:15:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:44.458 17:15:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.458 17:15:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.458 17:15:13 -- common/autotest_common.sh@10 -- # set +x 00:04:44.458 ************************************ 00:04:44.458 START TEST rpc 00:04:44.458 ************************************ 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:44.459 * Looking for test storage... 
00:04:44.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.459 17:15:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.459 17:15:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.459 17:15:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.459 17:15:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.459 17:15:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.459 17:15:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.459 17:15:13 rpc -- scripts/common.sh@345 -- # : 1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.459 17:15:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.459 17:15:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.459 17:15:13 rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.459 17:15:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.459 17:15:13 rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.459 17:15:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.459 17:15:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.459 17:15:13 rpc -- scripts/common.sh@368 -- # return 0 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.459 --rc genhtml_branch_coverage=1 00:04:44.459 --rc genhtml_function_coverage=1 00:04:44.459 --rc genhtml_legend=1 00:04:44.459 --rc geninfo_all_blocks=1 00:04:44.459 --rc geninfo_unexecuted_blocks=1 00:04:44.459 00:04:44.459 ' 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.459 --rc genhtml_branch_coverage=1 00:04:44.459 --rc genhtml_function_coverage=1 00:04:44.459 --rc genhtml_legend=1 00:04:44.459 --rc geninfo_all_blocks=1 00:04:44.459 --rc geninfo_unexecuted_blocks=1 00:04:44.459 00:04:44.459 ' 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.459 --rc genhtml_branch_coverage=1 00:04:44.459 --rc genhtml_function_coverage=1 
00:04:44.459 --rc genhtml_legend=1 00:04:44.459 --rc geninfo_all_blocks=1 00:04:44.459 --rc geninfo_unexecuted_blocks=1 00:04:44.459 00:04:44.459 ' 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.459 --rc genhtml_branch_coverage=1 00:04:44.459 --rc genhtml_function_coverage=1 00:04:44.459 --rc genhtml_legend=1 00:04:44.459 --rc geninfo_all_blocks=1 00:04:44.459 --rc geninfo_unexecuted_blocks=1 00:04:44.459 00:04:44.459 ' 00:04:44.459 17:15:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2383966 00:04:44.459 17:15:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.459 17:15:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:44.459 17:15:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2383966 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@835 -- # '[' -z 2383966 ']' 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.459 17:15:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.718 [2024-12-09 17:15:13.679382] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:04:44.718 [2024-12-09 17:15:13.679427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383966 ] 00:04:44.718 [2024-12-09 17:15:13.753990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.718 [2024-12-09 17:15:13.793770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.718 [2024-12-09 17:15:13.793807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2383966' to capture a snapshot of events at runtime. 00:04:44.718 [2024-12-09 17:15:13.793814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.718 [2024-12-09 17:15:13.793820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.718 [2024-12-09 17:15:13.793824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2383966 for offline analysis/debug. 
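Note: the prologue above is the standard rpc.sh setup — spdk_tgt is started with '-e bdev' and the suite blocks on waitforlisten until the RPC server accepts commands on the UNIX-domain socket /var/tmp/spdk.sock. A minimal standalone sketch of the same start-and-wait pattern, with the waitforlisten helper replaced by a plain retry loop (rootdir is assumed to point at the SPDK checkout used by this job):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $rootdir/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # poll the default RPC socket until the target answers; a simplified
    # stand-in for the waitforlisten helper from autotest_common.sh
    until $rootdir/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt ($tgt_pid) is up"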
00:04:44.718 [2024-12-09 17:15:13.794340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.976 17:15:14 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.976 17:15:14 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.976 17:15:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.976 17:15:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.976 17:15:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.976 17:15:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.976 17:15:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.976 17:15:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.976 17:15:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.976 ************************************ 00:04:44.976 START TEST rpc_integrity 00:04:44.976 ************************************ 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.976 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.976 { 00:04:44.976 "name": "Malloc0", 00:04:44.976 "aliases": [ 00:04:44.976 "e85a3fa4-b5b5-4c4c-b3a0-b0898a00f323" 00:04:44.976 ], 00:04:44.976 "product_name": "Malloc disk", 00:04:44.976 "block_size": 512, 00:04:44.976 "num_blocks": 16384, 00:04:44.976 "uuid": "e85a3fa4-b5b5-4c4c-b3a0-b0898a00f323", 00:04:44.976 "assigned_rate_limits": { 00:04:44.976 "rw_ios_per_sec": 0, 00:04:44.976 "rw_mbytes_per_sec": 0, 00:04:44.976 "r_mbytes_per_sec": 0, 00:04:44.976 "w_mbytes_per_sec": 0 00:04:44.976 }, 
00:04:44.976 "claimed": false, 00:04:44.976 "zoned": false, 00:04:44.976 "supported_io_types": { 00:04:44.976 "read": true, 00:04:44.976 "write": true, 00:04:44.976 "unmap": true, 00:04:44.976 "flush": true, 00:04:44.976 "reset": true, 00:04:44.976 "nvme_admin": false, 00:04:44.976 "nvme_io": false, 00:04:44.976 "nvme_io_md": false, 00:04:44.976 "write_zeroes": true, 00:04:44.976 "zcopy": true, 00:04:44.976 "get_zone_info": false, 00:04:44.976 "zone_management": false, 00:04:44.976 "zone_append": false, 00:04:44.976 "compare": false, 00:04:44.976 "compare_and_write": false, 00:04:44.976 "abort": true, 00:04:44.976 "seek_hole": false, 00:04:44.976 "seek_data": false, 00:04:44.976 "copy": true, 00:04:44.976 "nvme_iov_md": false 00:04:44.976 }, 00:04:44.976 "memory_domains": [ 00:04:44.976 { 00:04:44.976 "dma_device_id": "system", 00:04:44.976 "dma_device_type": 1 00:04:44.976 }, 00:04:44.976 { 00:04:44.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.976 "dma_device_type": 2 00:04:44.976 } 00:04:44.976 ], 00:04:44.976 "driver_specific": {} 00:04:44.976 } 00:04:44.976 ]' 00:04:44.976 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 [2024-12-09 17:15:14.182060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.235 [2024-12-09 17:15:14.182094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.235 [2024-12-09 17:15:14.182105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2174a40 00:04:45.235 [2024-12-09 17:15:14.182128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.235 [2024-12-09 17:15:14.183209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.235 [2024-12-09 17:15:14.183243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.235 Passthru0 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.235 { 00:04:45.235 "name": "Malloc0", 00:04:45.235 "aliases": [ 00:04:45.235 "e85a3fa4-b5b5-4c4c-b3a0-b0898a00f323" 00:04:45.235 ], 00:04:45.235 "product_name": "Malloc disk", 00:04:45.235 "block_size": 512, 00:04:45.235 "num_blocks": 16384, 00:04:45.235 "uuid": "e85a3fa4-b5b5-4c4c-b3a0-b0898a00f323", 00:04:45.235 "assigned_rate_limits": { 00:04:45.235 "rw_ios_per_sec": 0, 00:04:45.235 "rw_mbytes_per_sec": 0, 00:04:45.235 "r_mbytes_per_sec": 0, 00:04:45.235 "w_mbytes_per_sec": 0 00:04:45.235 }, 00:04:45.235 "claimed": true, 00:04:45.235 "claim_type": "exclusive_write", 00:04:45.235 "zoned": false, 00:04:45.235 "supported_io_types": { 00:04:45.235 "read": true, 00:04:45.235 "write": true, 00:04:45.235 "unmap": true, 00:04:45.235 "flush": 
true, 00:04:45.235 "reset": true, 00:04:45.235 "nvme_admin": false, 00:04:45.235 "nvme_io": false, 00:04:45.235 "nvme_io_md": false, 00:04:45.235 "write_zeroes": true, 00:04:45.235 "zcopy": true, 00:04:45.235 "get_zone_info": false, 00:04:45.235 "zone_management": false, 00:04:45.235 "zone_append": false, 00:04:45.235 "compare": false, 00:04:45.235 "compare_and_write": false, 00:04:45.235 "abort": true, 00:04:45.235 "seek_hole": false, 00:04:45.235 "seek_data": false, 00:04:45.235 "copy": true, 00:04:45.235 "nvme_iov_md": false 00:04:45.235 }, 00:04:45.235 "memory_domains": [ 00:04:45.235 { 00:04:45.235 "dma_device_id": "system", 00:04:45.235 "dma_device_type": 1 00:04:45.235 }, 00:04:45.235 { 00:04:45.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.235 "dma_device_type": 2 00:04:45.235 } 00:04:45.235 ], 00:04:45.235 "driver_specific": {} 00:04:45.235 }, 00:04:45.235 { 00:04:45.235 "name": "Passthru0", 00:04:45.235 "aliases": [ 00:04:45.235 "a534e752-1b69-52b8-9daa-0ba027d4b736" 00:04:45.235 ], 00:04:45.235 "product_name": "passthru", 00:04:45.235 "block_size": 512, 00:04:45.235 "num_blocks": 16384, 00:04:45.235 "uuid": "a534e752-1b69-52b8-9daa-0ba027d4b736", 00:04:45.235 "assigned_rate_limits": { 00:04:45.235 "rw_ios_per_sec": 0, 00:04:45.235 "rw_mbytes_per_sec": 0, 00:04:45.235 "r_mbytes_per_sec": 0, 00:04:45.235 "w_mbytes_per_sec": 0 00:04:45.235 }, 00:04:45.235 "claimed": false, 00:04:45.235 "zoned": false, 00:04:45.235 "supported_io_types": { 00:04:45.235 "read": true, 00:04:45.235 "write": true, 00:04:45.235 "unmap": true, 00:04:45.235 "flush": true, 00:04:45.235 "reset": true, 00:04:45.235 "nvme_admin": false, 00:04:45.235 "nvme_io": false, 00:04:45.235 "nvme_io_md": false, 00:04:45.235 "write_zeroes": true, 00:04:45.235 "zcopy": true, 00:04:45.235 "get_zone_info": false, 00:04:45.235 "zone_management": false, 00:04:45.235 "zone_append": false, 00:04:45.235 "compare": false, 00:04:45.235 "compare_and_write": false, 00:04:45.235 "abort": true, 00:04:45.235 "seek_hole": false, 00:04:45.235 "seek_data": false, 00:04:45.235 "copy": true, 00:04:45.235 "nvme_iov_md": false 00:04:45.235 }, 00:04:45.235 "memory_domains": [ 00:04:45.235 { 00:04:45.235 "dma_device_id": "system", 00:04:45.235 "dma_device_type": 1 00:04:45.235 }, 00:04:45.235 { 00:04:45.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.235 "dma_device_type": 2 00:04:45.235 } 00:04:45.235 ], 00:04:45.235 "driver_specific": { 00:04:45.235 "passthru": { 00:04:45.235 "name": "Passthru0", 00:04:45.235 "base_bdev_name": "Malloc0" 00:04:45.235 } 00:04:45.235 } 00:04:45.235 } 00:04:45.235 ]' 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.235 17:15:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.235 00:04:45.235 real 0m0.278s 00:04:45.235 user 0m0.171s 00:04:45.235 sys 0m0.043s 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 ************************************ 00:04:45.235 END TEST rpc_integrity 00:04:45.235 ************************************ 00:04:45.235 17:15:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.235 17:15:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.235 17:15:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.235 17:15:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 ************************************ 00:04:45.235 START TEST rpc_plugins 00:04:45.235 ************************************ 00:04:45.235 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:45.235 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.235 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.235 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.235 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.235 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.235 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.235 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.494 { 00:04:45.494 "name": "Malloc1", 00:04:45.494 "aliases": [ 00:04:45.494 "d9c3e6e8-8e12-40a5-a4f0-53f973a0bc6d" 00:04:45.494 ], 00:04:45.494 "product_name": "Malloc disk", 00:04:45.494 "block_size": 4096, 00:04:45.494 "num_blocks": 256, 00:04:45.494 "uuid": "d9c3e6e8-8e12-40a5-a4f0-53f973a0bc6d", 00:04:45.494 "assigned_rate_limits": { 00:04:45.494 "rw_ios_per_sec": 0, 00:04:45.494 "rw_mbytes_per_sec": 0, 00:04:45.494 "r_mbytes_per_sec": 0, 00:04:45.494 "w_mbytes_per_sec": 0 00:04:45.494 }, 00:04:45.494 "claimed": false, 00:04:45.494 "zoned": false, 00:04:45.494 "supported_io_types": { 00:04:45.494 "read": true, 00:04:45.494 "write": true, 00:04:45.494 "unmap": true, 00:04:45.494 "flush": true, 00:04:45.494 "reset": true, 00:04:45.494 "nvme_admin": false, 00:04:45.494 "nvme_io": false, 00:04:45.494 "nvme_io_md": false, 00:04:45.494 "write_zeroes": true, 00:04:45.494 "zcopy": true, 00:04:45.494 "get_zone_info": false, 00:04:45.494 "zone_management": false, 00:04:45.494 "zone_append": false, 00:04:45.494 "compare": false, 00:04:45.494 "compare_and_write": false, 00:04:45.494 "abort": true, 00:04:45.494 "seek_hole": false, 00:04:45.494 "seek_data": false, 00:04:45.494 "copy": true, 00:04:45.494 "nvme_iov_md": false 
00:04:45.494 }, 00:04:45.494 "memory_domains": [ 00:04:45.494 { 00:04:45.494 "dma_device_id": "system", 00:04:45.494 "dma_device_type": 1 00:04:45.494 }, 00:04:45.494 { 00:04:45.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.494 "dma_device_type": 2 00:04:45.494 } 00:04:45.494 ], 00:04:45.494 "driver_specific": {} 00:04:45.494 } 00:04:45.494 ]' 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.494 17:15:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.494 00:04:45.494 real 0m0.143s 00:04:45.494 user 0m0.090s 00:04:45.494 sys 0m0.018s 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.494 17:15:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.494 ************************************ 00:04:45.494 END TEST rpc_plugins 00:04:45.494 ************************************ 00:04:45.494 17:15:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.494 17:15:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.494 17:15:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.494 17:15:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.494 ************************************ 00:04:45.494 START TEST rpc_trace_cmd_test 00:04:45.494 ************************************ 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.494 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2383966", 00:04:45.494 "tpoint_group_mask": "0x8", 00:04:45.494 "iscsi_conn": { 00:04:45.494 "mask": "0x2", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "scsi": { 00:04:45.494 "mask": "0x4", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "bdev": { 00:04:45.494 "mask": "0x8", 00:04:45.494 "tpoint_mask": "0xffffffffffffffff" 00:04:45.494 }, 00:04:45.494 "nvmf_rdma": { 00:04:45.494 "mask": "0x10", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "nvmf_tcp": { 00:04:45.494 "mask": "0x20", 00:04:45.494 
"tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "ftl": { 00:04:45.494 "mask": "0x40", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "blobfs": { 00:04:45.494 "mask": "0x80", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "dsa": { 00:04:45.494 "mask": "0x200", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "thread": { 00:04:45.494 "mask": "0x400", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "nvme_pcie": { 00:04:45.494 "mask": "0x800", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "iaa": { 00:04:45.494 "mask": "0x1000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "nvme_tcp": { 00:04:45.494 "mask": "0x2000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "bdev_nvme": { 00:04:45.494 "mask": "0x4000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "sock": { 00:04:45.494 "mask": "0x8000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "blob": { 00:04:45.494 "mask": "0x10000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "bdev_raid": { 00:04:45.494 "mask": "0x20000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 }, 00:04:45.494 "scheduler": { 00:04:45.494 "mask": "0x40000", 00:04:45.494 "tpoint_mask": "0x0" 00:04:45.494 } 00:04:45.494 }' 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:45.494 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.753 00:04:45.753 real 0m0.213s 00:04:45.753 user 0m0.178s 00:04:45.753 sys 0m0.026s 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.753 17:15:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.753 ************************************ 00:04:45.753 END TEST rpc_trace_cmd_test 00:04:45.753 ************************************ 00:04:45.753 17:15:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.753 17:15:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.753 17:15:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.753 17:15:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.753 17:15:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.753 17:15:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.753 ************************************ 00:04:45.753 START TEST rpc_daemon_integrity 00:04:45.753 ************************************ 00:04:45.753 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:45.753 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.753 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.753 17:15:14 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.753 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.753 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.753 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.012 { 00:04:46.012 "name": "Malloc2", 00:04:46.012 "aliases": [ 00:04:46.012 "a605cf81-91b0-4b36-bda5-f97b6d47df87" 00:04:46.012 ], 00:04:46.012 "product_name": "Malloc disk", 00:04:46.012 "block_size": 512, 00:04:46.012 "num_blocks": 16384, 00:04:46.012 "uuid": "a605cf81-91b0-4b36-bda5-f97b6d47df87", 00:04:46.012 "assigned_rate_limits": { 00:04:46.012 "rw_ios_per_sec": 0, 00:04:46.012 "rw_mbytes_per_sec": 0, 00:04:46.012 "r_mbytes_per_sec": 0, 00:04:46.012 "w_mbytes_per_sec": 0 00:04:46.012 }, 00:04:46.012 "claimed": false, 00:04:46.012 "zoned": false, 00:04:46.012 "supported_io_types": { 00:04:46.012 "read": true, 00:04:46.012 "write": true, 00:04:46.012 "unmap": true, 00:04:46.012 "flush": true, 00:04:46.012 "reset": true, 00:04:46.012 "nvme_admin": false, 00:04:46.012 "nvme_io": false, 00:04:46.012 "nvme_io_md": false, 00:04:46.012 "write_zeroes": true, 00:04:46.012 "zcopy": true, 00:04:46.012 "get_zone_info": false, 00:04:46.012 "zone_management": false, 00:04:46.012 "zone_append": false, 00:04:46.012 "compare": false, 00:04:46.012 "compare_and_write": false, 00:04:46.012 "abort": true, 00:04:46.012 "seek_hole": false, 00:04:46.012 "seek_data": false, 00:04:46.012 "copy": true, 00:04:46.012 "nvme_iov_md": false 00:04:46.012 }, 00:04:46.012 "memory_domains": [ 00:04:46.012 { 00:04:46.012 "dma_device_id": "system", 00:04:46.012 "dma_device_type": 1 00:04:46.012 }, 00:04:46.012 { 00:04:46.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.012 "dma_device_type": 2 00:04:46.012 } 00:04:46.012 ], 00:04:46.012 "driver_specific": {} 00:04:46.012 } 00:04:46.012 ]' 00:04:46.012 17:15:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.012 [2024-12-09 17:15:15.008299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:46.012 
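Note: the long JSON arrays dumped above are full bdev descriptors returned by bdev_get_bdevs; the rpc.sh assertions only ever count them with 'jq length'. The same output can be sliced for individual fields — a sketch, assuming a target is still up on the default socket and rootdir points at the SPDK checkout:

    # count descriptors, as the "'[' 2 == 2 ']'" checks above do
    $rootdir/scripts/rpc.py bdev_get_bdevs | jq length
    # pull name, block size and claim state out of each descriptor
    $rootdir/scripts/rpc.py bdev_get_bdevs | \
        jq -r '.[] | "\(.name) \(.block_size) claimed=\(.claimed)"'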
[2024-12-09 17:15:15.008326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.012 [2024-12-09 17:15:15.008336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21422e0 00:04:46.012 [2024-12-09 17:15:15.008343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.012 [2024-12-09 17:15:15.009300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.012 [2024-12-09 17:15:15.009318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.012 Passthru0 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.012 { 00:04:46.012 "name": "Malloc2", 00:04:46.012 "aliases": [ 00:04:46.012 "a605cf81-91b0-4b36-bda5-f97b6d47df87" 00:04:46.012 ], 00:04:46.012 "product_name": "Malloc disk", 00:04:46.012 "block_size": 512, 00:04:46.012 "num_blocks": 16384, 00:04:46.012 "uuid": "a605cf81-91b0-4b36-bda5-f97b6d47df87", 00:04:46.012 "assigned_rate_limits": { 00:04:46.012 "rw_ios_per_sec": 0, 00:04:46.012 "rw_mbytes_per_sec": 0, 00:04:46.012 "r_mbytes_per_sec": 0, 00:04:46.012 "w_mbytes_per_sec": 0 00:04:46.012 }, 00:04:46.012 "claimed": true, 00:04:46.012 "claim_type": "exclusive_write", 00:04:46.012 "zoned": false, 00:04:46.012 "supported_io_types": { 00:04:46.012 "read": true, 00:04:46.012 "write": true, 00:04:46.012 "unmap": true, 00:04:46.012 "flush": true, 00:04:46.012 "reset": true, 00:04:46.012 "nvme_admin": false, 00:04:46.012 "nvme_io": false, 00:04:46.012 "nvme_io_md": false, 00:04:46.012 "write_zeroes": true, 00:04:46.012 "zcopy": true, 00:04:46.012 "get_zone_info": false, 00:04:46.012 "zone_management": false, 00:04:46.012 "zone_append": false, 00:04:46.012 "compare": false, 00:04:46.012 "compare_and_write": false, 00:04:46.012 "abort": true, 00:04:46.012 "seek_hole": false, 00:04:46.012 "seek_data": false, 00:04:46.012 "copy": true, 00:04:46.012 "nvme_iov_md": false 00:04:46.012 }, 00:04:46.012 "memory_domains": [ 00:04:46.012 { 00:04:46.012 "dma_device_id": "system", 00:04:46.012 "dma_device_type": 1 00:04:46.012 }, 00:04:46.012 { 00:04:46.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.012 "dma_device_type": 2 00:04:46.012 } 00:04:46.012 ], 00:04:46.012 "driver_specific": {} 00:04:46.012 }, 00:04:46.012 { 00:04:46.012 "name": "Passthru0", 00:04:46.012 "aliases": [ 00:04:46.012 "3be2df6b-0d6f-5ed7-8d11-b1f96ea3b10e" 00:04:46.012 ], 00:04:46.012 "product_name": "passthru", 00:04:46.012 "block_size": 512, 00:04:46.012 "num_blocks": 16384, 00:04:46.012 "uuid": "3be2df6b-0d6f-5ed7-8d11-b1f96ea3b10e", 00:04:46.012 "assigned_rate_limits": { 00:04:46.012 "rw_ios_per_sec": 0, 00:04:46.012 "rw_mbytes_per_sec": 0, 00:04:46.012 "r_mbytes_per_sec": 0, 00:04:46.012 "w_mbytes_per_sec": 0 00:04:46.012 }, 00:04:46.012 "claimed": false, 00:04:46.012 "zoned": false, 00:04:46.012 "supported_io_types": { 00:04:46.012 "read": true, 00:04:46.012 "write": true, 00:04:46.012 "unmap": true, 00:04:46.012 "flush": true, 00:04:46.012 "reset": true, 
00:04:46.012 "nvme_admin": false, 00:04:46.012 "nvme_io": false, 00:04:46.012 "nvme_io_md": false, 00:04:46.012 "write_zeroes": true, 00:04:46.012 "zcopy": true, 00:04:46.012 "get_zone_info": false, 00:04:46.012 "zone_management": false, 00:04:46.012 "zone_append": false, 00:04:46.012 "compare": false, 00:04:46.012 "compare_and_write": false, 00:04:46.012 "abort": true, 00:04:46.012 "seek_hole": false, 00:04:46.012 "seek_data": false, 00:04:46.012 "copy": true, 00:04:46.012 "nvme_iov_md": false 00:04:46.012 }, 00:04:46.012 "memory_domains": [ 00:04:46.012 { 00:04:46.012 "dma_device_id": "system", 00:04:46.012 "dma_device_type": 1 00:04:46.012 }, 00:04:46.012 { 00:04:46.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.012 "dma_device_type": 2 00:04:46.012 } 00:04:46.012 ], 00:04:46.012 "driver_specific": { 00:04:46.012 "passthru": { 00:04:46.012 "name": "Passthru0", 00:04:46.012 "base_bdev_name": "Malloc2" 00:04:46.012 } 00:04:46.012 } 00:04:46.012 } 00:04:46.012 ]' 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.012 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.013 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.013 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.013 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.013 17:15:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.013 00:04:46.013 real 0m0.279s 00:04:46.013 user 0m0.176s 00:04:46.013 sys 0m0.039s 00:04:46.013 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.013 17:15:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.013 ************************************ 00:04:46.013 END TEST rpc_daemon_integrity 00:04:46.013 ************************************ 00:04:46.013 17:15:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.270 17:15:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2383966 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@954 -- # '[' -z 2383966 ']' 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@958 -- # kill -0 2383966 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2383966 
00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2383966' 00:04:46.270 killing process with pid 2383966 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@973 -- # kill 2383966 00:04:46.270 17:15:15 rpc -- common/autotest_common.sh@978 -- # wait 2383966 00:04:46.529 00:04:46.529 real 0m2.082s 00:04:46.529 user 0m2.648s 00:04:46.529 sys 0m0.684s 00:04:46.529 17:15:15 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.529 17:15:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.529 ************************************ 00:04:46.529 END TEST rpc 00:04:46.529 ************************************ 00:04:46.529 17:15:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.529 17:15:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.529 17:15:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.529 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:04:46.529 ************************************ 00:04:46.529 START TEST skip_rpc 00:04:46.529 ************************************ 00:04:46.529 17:15:15 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.529 * Looking for test storage... 00:04:46.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.529 17:15:15 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.529 17:15:15 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.529 17:15:15 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.788 17:15:15 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.788 17:15:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.789 --rc genhtml_branch_coverage=1 00:04:46.789 --rc genhtml_function_coverage=1 00:04:46.789 --rc genhtml_legend=1 00:04:46.789 --rc geninfo_all_blocks=1 00:04:46.789 --rc geninfo_unexecuted_blocks=1 00:04:46.789 00:04:46.789 ' 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.789 --rc genhtml_branch_coverage=1 00:04:46.789 --rc genhtml_function_coverage=1 00:04:46.789 --rc genhtml_legend=1 00:04:46.789 --rc geninfo_all_blocks=1 00:04:46.789 --rc geninfo_unexecuted_blocks=1 00:04:46.789 00:04:46.789 ' 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.789 --rc genhtml_branch_coverage=1 00:04:46.789 --rc genhtml_function_coverage=1 00:04:46.789 --rc genhtml_legend=1 00:04:46.789 --rc geninfo_all_blocks=1 00:04:46.789 --rc geninfo_unexecuted_blocks=1 00:04:46.789 00:04:46.789 ' 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.789 --rc genhtml_branch_coverage=1 00:04:46.789 --rc genhtml_function_coverage=1 00:04:46.789 --rc genhtml_legend=1 00:04:46.789 --rc geninfo_all_blocks=1 00:04:46.789 --rc geninfo_unexecuted_blocks=1 00:04:46.789 00:04:46.789 ' 00:04:46.789 17:15:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.789 17:15:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.789 17:15:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.789 17:15:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.789 ************************************ 00:04:46.789 START TEST skip_rpc 00:04:46.789 ************************************ 00:04:46.789 17:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:46.789 
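Note: the skip_rpc test that starts below inverts the earlier setup — spdk_tgt is launched with --no-rpc-server, so the target runs but never opens /var/tmp/spdk.sock, and every RPC is expected to fail (the NOT rpc_cmd spdk_get_version check). A condensed sketch of that expectation, reusing the rootdir assumption from above:

    # target up, RPC server deliberately absent
    $rootdir/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5    # the test sleeps rather than waitforlisten, since nothing listens
    if $rootdir/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC server answered" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"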
17:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2384599 00:04:46.789 17:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.789 17:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.789 17:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.789 [2024-12-09 17:15:15.864597] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:04:46.789 [2024-12-09 17:15:15.864634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384599 ] 00:04:46.789 [2024-12-09 17:15:15.936055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.047 [2024-12-09 17:15:15.975744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2384599 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2384599 ']' 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2384599 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2384599 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2384599' 00:04:52.311 killing process with pid 2384599 00:04:52.311 17:15:20 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2384599 00:04:52.311 17:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2384599 00:04:52.311 00:04:52.311 real 0m5.366s 00:04:52.311 user 0m5.143s 00:04:52.311 sys 0m0.264s 00:04:52.311 17:15:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.311 17:15:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.311 ************************************ 00:04:52.311 END TEST skip_rpc 00:04:52.311 ************************************ 00:04:52.311 17:15:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.311 17:15:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.311 17:15:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.311 17:15:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.311 ************************************ 00:04:52.311 START TEST skip_rpc_with_json 00:04:52.311 ************************************ 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2385532 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2385532 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2385532 ']' 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.311 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.311 [2024-12-09 17:15:21.305792] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:04:52.311 [2024-12-09 17:15:21.305837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385532 ] 00:04:52.311 [2024-12-09 17:15:21.380304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.311 [2024-12-09 17:15:21.417577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.570 [2024-12-09 17:15:21.639863] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:52.570 request: 00:04:52.570 { 00:04:52.570 "trtype": "tcp", 00:04:52.570 "method": "nvmf_get_transports", 00:04:52.570 "req_id": 1 00:04:52.570 } 00:04:52.570 Got JSON-RPC error response 00:04:52.570 response: 00:04:52.570 { 00:04:52.570 "code": -19, 00:04:52.570 "message": "No such device" 00:04:52.570 } 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.570 [2024-12-09 17:15:21.651973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.570 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.829 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.829 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.829 { 00:04:52.829 "subsystems": [ 00:04:52.829 { 00:04:52.829 "subsystem": "fsdev", 00:04:52.829 "config": [ 00:04:52.829 { 00:04:52.829 "method": "fsdev_set_opts", 00:04:52.829 "params": { 00:04:52.829 "fsdev_io_pool_size": 65535, 00:04:52.829 "fsdev_io_cache_size": 256 00:04:52.829 } 00:04:52.829 } 00:04:52.829 ] 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "vfio_user_target", 00:04:52.829 "config": null 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "keyring", 00:04:52.829 "config": [] 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "iobuf", 00:04:52.829 "config": [ 00:04:52.829 { 00:04:52.829 "method": "iobuf_set_options", 00:04:52.829 "params": { 00:04:52.829 "small_pool_count": 8192, 00:04:52.829 "large_pool_count": 1024, 00:04:52.829 "small_bufsize": 8192, 00:04:52.829 "large_bufsize": 135168, 00:04:52.829 "enable_numa": false 00:04:52.829 } 00:04:52.829 } 
00:04:52.829 ] 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "sock", 00:04:52.829 "config": [ 00:04:52.829 { 00:04:52.829 "method": "sock_set_default_impl", 00:04:52.829 "params": { 00:04:52.829 "impl_name": "posix" 00:04:52.829 } 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "method": "sock_impl_set_options", 00:04:52.829 "params": { 00:04:52.829 "impl_name": "ssl", 00:04:52.829 "recv_buf_size": 4096, 00:04:52.829 "send_buf_size": 4096, 00:04:52.829 "enable_recv_pipe": true, 00:04:52.829 "enable_quickack": false, 00:04:52.829 "enable_placement_id": 0, 00:04:52.829 "enable_zerocopy_send_server": true, 00:04:52.829 "enable_zerocopy_send_client": false, 00:04:52.829 "zerocopy_threshold": 0, 00:04:52.829 "tls_version": 0, 00:04:52.829 "enable_ktls": false 00:04:52.829 } 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "method": "sock_impl_set_options", 00:04:52.829 "params": { 00:04:52.829 "impl_name": "posix", 00:04:52.829 "recv_buf_size": 2097152, 00:04:52.829 "send_buf_size": 2097152, 00:04:52.829 "enable_recv_pipe": true, 00:04:52.829 "enable_quickack": false, 00:04:52.829 "enable_placement_id": 0, 00:04:52.829 "enable_zerocopy_send_server": true, 00:04:52.829 "enable_zerocopy_send_client": false, 00:04:52.829 "zerocopy_threshold": 0, 00:04:52.829 "tls_version": 0, 00:04:52.829 "enable_ktls": false 00:04:52.829 } 00:04:52.829 } 00:04:52.829 ] 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "vmd", 00:04:52.829 "config": [] 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "accel", 00:04:52.829 "config": [ 00:04:52.829 { 00:04:52.829 "method": "accel_set_options", 00:04:52.829 "params": { 00:04:52.829 "small_cache_size": 128, 00:04:52.829 "large_cache_size": 16, 00:04:52.829 "task_count": 2048, 00:04:52.829 "sequence_count": 2048, 00:04:52.829 "buf_count": 2048 00:04:52.829 } 00:04:52.829 } 00:04:52.829 ] 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "subsystem": "bdev", 00:04:52.829 "config": [ 00:04:52.829 { 00:04:52.829 "method": "bdev_set_options", 00:04:52.829 "params": { 00:04:52.829 "bdev_io_pool_size": 65535, 00:04:52.829 "bdev_io_cache_size": 256, 00:04:52.829 "bdev_auto_examine": true, 00:04:52.829 "iobuf_small_cache_size": 128, 00:04:52.829 "iobuf_large_cache_size": 16 00:04:52.829 } 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "method": "bdev_raid_set_options", 00:04:52.829 "params": { 00:04:52.829 "process_window_size_kb": 1024, 00:04:52.829 "process_max_bandwidth_mb_sec": 0 00:04:52.829 } 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "method": "bdev_iscsi_set_options", 00:04:52.829 "params": { 00:04:52.829 "timeout_sec": 30 00:04:52.829 } 00:04:52.829 }, 00:04:52.829 { 00:04:52.829 "method": "bdev_nvme_set_options", 00:04:52.829 "params": { 00:04:52.829 "action_on_timeout": "none", 00:04:52.829 "timeout_us": 0, 00:04:52.829 "timeout_admin_us": 0, 00:04:52.829 "keep_alive_timeout_ms": 10000, 00:04:52.829 "arbitration_burst": 0, 00:04:52.829 "low_priority_weight": 0, 00:04:52.829 "medium_priority_weight": 0, 00:04:52.829 "high_priority_weight": 0, 00:04:52.829 "nvme_adminq_poll_period_us": 10000, 00:04:52.829 "nvme_ioq_poll_period_us": 0, 00:04:52.829 "io_queue_requests": 0, 00:04:52.829 "delay_cmd_submit": true, 00:04:52.829 "transport_retry_count": 4, 00:04:52.829 "bdev_retry_count": 3, 00:04:52.829 "transport_ack_timeout": 0, 00:04:52.829 "ctrlr_loss_timeout_sec": 0, 00:04:52.829 "reconnect_delay_sec": 0, 00:04:52.829 "fast_io_fail_timeout_sec": 0, 00:04:52.829 "disable_auto_failback": false, 00:04:52.829 "generate_uuids": false, 00:04:52.829 "transport_tos": 
0, 00:04:52.829 "nvme_error_stat": false, 00:04:52.829 "rdma_srq_size": 0, 00:04:52.830 "io_path_stat": false, 00:04:52.830 "allow_accel_sequence": false, 00:04:52.830 "rdma_max_cq_size": 0, 00:04:52.830 "rdma_cm_event_timeout_ms": 0, 00:04:52.830 "dhchap_digests": [ 00:04:52.830 "sha256", 00:04:52.830 "sha384", 00:04:52.830 "sha512" 00:04:52.830 ], 00:04:52.830 "dhchap_dhgroups": [ 00:04:52.830 "null", 00:04:52.830 "ffdhe2048", 00:04:52.830 "ffdhe3072", 00:04:52.830 "ffdhe4096", 00:04:52.830 "ffdhe6144", 00:04:52.830 "ffdhe8192" 00:04:52.830 ] 00:04:52.830 } 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "method": "bdev_nvme_set_hotplug", 00:04:52.830 "params": { 00:04:52.830 "period_us": 100000, 00:04:52.830 "enable": false 00:04:52.830 } 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "method": "bdev_wait_for_examine" 00:04:52.830 } 00:04:52.830 ] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "scsi", 00:04:52.830 "config": null 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "scheduler", 00:04:52.830 "config": [ 00:04:52.830 { 00:04:52.830 "method": "framework_set_scheduler", 00:04:52.830 "params": { 00:04:52.830 "name": "static" 00:04:52.830 } 00:04:52.830 } 00:04:52.830 ] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "vhost_scsi", 00:04:52.830 "config": [] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "vhost_blk", 00:04:52.830 "config": [] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "ublk", 00:04:52.830 "config": [] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "nbd", 00:04:52.830 "config": [] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "nvmf", 00:04:52.830 "config": [ 00:04:52.830 { 00:04:52.830 "method": "nvmf_set_config", 00:04:52.830 "params": { 00:04:52.830 "discovery_filter": "match_any", 00:04:52.830 "admin_cmd_passthru": { 00:04:52.830 "identify_ctrlr": false 00:04:52.830 }, 00:04:52.830 "dhchap_digests": [ 00:04:52.830 "sha256", 00:04:52.830 "sha384", 00:04:52.830 "sha512" 00:04:52.830 ], 00:04:52.830 "dhchap_dhgroups": [ 00:04:52.830 "null", 00:04:52.830 "ffdhe2048", 00:04:52.830 "ffdhe3072", 00:04:52.830 "ffdhe4096", 00:04:52.830 "ffdhe6144", 00:04:52.830 "ffdhe8192" 00:04:52.830 ] 00:04:52.830 } 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "method": "nvmf_set_max_subsystems", 00:04:52.830 "params": { 00:04:52.830 "max_subsystems": 1024 00:04:52.830 } 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "method": "nvmf_set_crdt", 00:04:52.830 "params": { 00:04:52.830 "crdt1": 0, 00:04:52.830 "crdt2": 0, 00:04:52.830 "crdt3": 0 00:04:52.830 } 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "method": "nvmf_create_transport", 00:04:52.830 "params": { 00:04:52.830 "trtype": "TCP", 00:04:52.830 "max_queue_depth": 128, 00:04:52.830 "max_io_qpairs_per_ctrlr": 127, 00:04:52.830 "in_capsule_data_size": 4096, 00:04:52.830 "max_io_size": 131072, 00:04:52.830 "io_unit_size": 131072, 00:04:52.830 "max_aq_depth": 128, 00:04:52.830 "num_shared_buffers": 511, 00:04:52.830 "buf_cache_size": 4294967295, 00:04:52.830 "dif_insert_or_strip": false, 00:04:52.830 "zcopy": false, 00:04:52.830 "c2h_success": true, 00:04:52.830 "sock_priority": 0, 00:04:52.830 "abort_timeout_sec": 1, 00:04:52.830 "ack_timeout": 0, 00:04:52.830 "data_wr_pool_size": 0 00:04:52.830 } 00:04:52.830 } 00:04:52.830 ] 00:04:52.830 }, 00:04:52.830 { 00:04:52.830 "subsystem": "iscsi", 00:04:52.830 "config": [ 00:04:52.830 { 00:04:52.830 "method": "iscsi_set_options", 00:04:52.830 "params": { 00:04:52.830 "node_base": "iqn.2016-06.io.spdk", 00:04:52.830 "max_sessions": 
128, 00:04:52.830 "max_connections_per_session": 2, 00:04:52.830 "max_queue_depth": 64, 00:04:52.830 "default_time2wait": 2, 00:04:52.830 "default_time2retain": 20, 00:04:52.830 "first_burst_length": 8192, 00:04:52.830 "immediate_data": true, 00:04:52.830 "allow_duplicated_isid": false, 00:04:52.830 "error_recovery_level": 0, 00:04:52.830 "nop_timeout": 60, 00:04:52.830 "nop_in_interval": 30, 00:04:52.830 "disable_chap": false, 00:04:52.830 "require_chap": false, 00:04:52.830 "mutual_chap": false, 00:04:52.830 "chap_group": 0, 00:04:52.830 "max_large_datain_per_connection": 64, 00:04:52.830 "max_r2t_per_connection": 4, 00:04:52.830 "pdu_pool_size": 36864, 00:04:52.830 "immediate_data_pool_size": 16384, 00:04:52.830 "data_out_pool_size": 2048 00:04:52.830 } 00:04:52.830 } 00:04:52.830 ] 00:04:52.830 } 00:04:52.830 ] 00:04:52.830 } 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2385532 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2385532 ']' 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2385532 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2385532 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2385532' 00:04:52.830 killing process with pid 2385532 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2385532 00:04:52.830 17:15:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2385532 00:04:53.089 17:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2385556 00:04:53.089 17:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.089 17:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2385556 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2385556 ']' 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2385556 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2385556 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2385556' 00:04:58.360 killing process with pid 2385556 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2385556 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2385556 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.360 00:04:58.360 real 0m6.283s 00:04:58.360 user 0m5.974s 00:04:58.360 sys 0m0.604s 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.360 17:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.360 ************************************ 00:04:58.360 END TEST skip_rpc_with_json 00:04:58.360 ************************************ 00:04:58.619 17:15:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.619 17:15:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.619 17:15:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.619 17:15:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.619 ************************************ 00:04:58.619 START TEST skip_rpc_with_delay 00:04:58.619 ************************************ 00:04:58.619 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:58.619 17:15:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.620 
[2024-12-09 17:15:27.659059] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.620 00:04:58.620 real 0m0.070s 00:04:58.620 user 0m0.046s 00:04:58.620 sys 0m0.023s 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.620 17:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.620 ************************************ 00:04:58.620 END TEST skip_rpc_with_delay 00:04:58.620 ************************************ 00:04:58.620 17:15:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.620 17:15:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.620 17:15:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.620 17:15:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.620 17:15:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.620 17:15:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.620 ************************************ 00:04:58.620 START TEST exit_on_failed_rpc_init 00:04:58.620 ************************************ 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2386519 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2386519 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2386519 ']' 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.620 17:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.879 [2024-12-09 17:15:27.797939] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:04:58.879 [2024-12-09 17:15:27.797985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386519 ] 00:04:58.879 [2024-12-09 17:15:27.855931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.879 [2024-12-09 17:15:27.898096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:59.139 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.139 [2024-12-09 17:15:28.187431] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:04:59.139 [2024-12-09 17:15:28.187476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386735 ] 00:04:59.139 [2024-12-09 17:15:28.259057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.139 [2024-12-09 17:15:28.298861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.139 [2024-12-09 17:15:28.298915] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:59.139 [2024-12-09 17:15:28.298924] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.139 [2024-12-09 17:15:28.298930] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2386519 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2386519 ']' 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2386519 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386519 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386519' 00:04:59.398 killing process with pid 2386519 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2386519 00:04:59.398 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2386519 00:04:59.658 00:04:59.658 real 0m0.944s 00:04:59.658 user 0m1.034s 00:04:59.658 sys 0m0.372s 00:04:59.658 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.658 17:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.658 ************************************ 00:04:59.658 END TEST exit_on_failed_rpc_init 00:04:59.658 ************************************ 00:04:59.658 17:15:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.658 00:04:59.658 real 0m13.129s 00:04:59.658 user 0m12.411s 00:04:59.658 sys 0m1.546s 00:04:59.658 17:15:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.658 17:15:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.658 ************************************ 00:04:59.658 END TEST skip_rpc 00:04:59.658 ************************************ 00:04:59.658 17:15:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.658 17:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.658 17:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.658 17:15:28 -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.658 ************************************ 00:04:59.658 START TEST rpc_client 00:04:59.658 ************************************ 00:04:59.658 17:15:28 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.917 * Looking for test storage... 00:04:59.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:59.917 17:15:28 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.917 17:15:28 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.917 17:15:28 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.917 17:15:28 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.917 17:15:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.918 17:15:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.918 --rc genhtml_branch_coverage=1 00:04:59.918 --rc genhtml_function_coverage=1 00:04:59.918 --rc genhtml_legend=1 00:04:59.918 --rc geninfo_all_blocks=1 00:04:59.918 --rc geninfo_unexecuted_blocks=1 00:04:59.918 00:04:59.918 ' 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.918 --rc genhtml_branch_coverage=1 00:04:59.918 --rc genhtml_function_coverage=1 00:04:59.918 --rc genhtml_legend=1 00:04:59.918 --rc geninfo_all_blocks=1 00:04:59.918 --rc geninfo_unexecuted_blocks=1 00:04:59.918 00:04:59.918 ' 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.918 --rc genhtml_branch_coverage=1 00:04:59.918 --rc genhtml_function_coverage=1 00:04:59.918 --rc genhtml_legend=1 00:04:59.918 --rc geninfo_all_blocks=1 00:04:59.918 --rc geninfo_unexecuted_blocks=1 00:04:59.918 00:04:59.918 ' 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.918 --rc genhtml_branch_coverage=1 00:04:59.918 --rc genhtml_function_coverage=1 00:04:59.918 --rc genhtml_legend=1 00:04:59.918 --rc geninfo_all_blocks=1 00:04:59.918 --rc geninfo_unexecuted_blocks=1 00:04:59.918 00:04:59.918 ' 00:04:59.918 17:15:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.918 OK 00:04:59.918 17:15:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.918 00:04:59.918 real 0m0.197s 00:04:59.918 user 0m0.107s 00:04:59.918 sys 0m0.103s 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.918 17:15:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.918 ************************************ 00:04:59.918 END TEST rpc_client 00:04:59.918 ************************************ 00:04:59.918 17:15:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
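# --- Annotation (editor's note, not part of the captured log) -----------------
# Every "START TEST x ... real/user/sys ... END TEST x" banner in this log is
# emitted by autotest_common.sh's run_test wrapper, invoked above for
# json_config. A rough sketch of the pattern (simplified; the real helper also
# validates its arguments and manages xtrace state):
run_test() {
  local test_name="$1"; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                          # runs the suite; prints real/user/sys
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}
# ------------------------------------------------------------------------------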
00:04:59.918 17:15:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.918 17:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.918 17:15:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.918 ************************************ 00:04:59.918 START TEST json_config 00:04:59.918 ************************************ 00:04:59.918 17:15:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.178 17:15:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.178 17:15:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.178 17:15:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.178 17:15:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.178 17:15:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.178 17:15:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:00.178 17:15:29 json_config -- scripts/common.sh@345 -- # : 1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.178 17:15:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.178 17:15:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@353 -- # local d=1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.178 17:15:29 json_config -- scripts/common.sh@355 -- # echo 1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.178 17:15:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@353 -- # local d=2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.178 17:15:29 json_config -- scripts/common.sh@355 -- # echo 2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.178 17:15:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.178 17:15:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.178 17:15:29 json_config -- scripts/common.sh@368 -- # return 0 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.178 --rc genhtml_branch_coverage=1 00:05:00.178 --rc genhtml_function_coverage=1 00:05:00.178 --rc genhtml_legend=1 00:05:00.178 --rc geninfo_all_blocks=1 00:05:00.178 --rc geninfo_unexecuted_blocks=1 00:05:00.178 00:05:00.178 ' 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.178 --rc genhtml_branch_coverage=1 00:05:00.178 --rc genhtml_function_coverage=1 00:05:00.178 --rc genhtml_legend=1 00:05:00.178 --rc geninfo_all_blocks=1 00:05:00.178 --rc geninfo_unexecuted_blocks=1 00:05:00.178 00:05:00.178 ' 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.178 --rc genhtml_branch_coverage=1 00:05:00.178 --rc genhtml_function_coverage=1 00:05:00.178 --rc genhtml_legend=1 00:05:00.178 --rc geninfo_all_blocks=1 00:05:00.178 --rc geninfo_unexecuted_blocks=1 00:05:00.178 00:05:00.178 ' 00:05:00.178 17:15:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.178 --rc genhtml_branch_coverage=1 00:05:00.178 --rc genhtml_function_coverage=1 00:05:00.178 --rc genhtml_legend=1 00:05:00.178 --rc geninfo_all_blocks=1 00:05:00.178 --rc geninfo_unexecuted_blocks=1 00:05:00.178 00:05:00.178 ' 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:00.178 17:15:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:00.178 17:15:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.178 17:15:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.178 17:15:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.178 17:15:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.178 17:15:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.178 17:15:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.178 17:15:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.178 17:15:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.178 17:15:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@51 -- # : 0 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
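# --- Annotation (editor's note, not part of the captured log) -----------------
# test/nvmf/common.sh, sourced above, seeds the transport defaults traced here
# (ports 4420-4422, IP prefix 192.168.100) and derives the host identity from
# nvme-cli. The host NQN seen in the trace is reproducible on the same machine:
nvme gen-hostnqn
# -> nqn.2014-08.org.nvmexpress:uuid:<host uuid>  (the uuid is typically read
#    from the DMI product uuid, so the value is host-specific)
# ------------------------------------------------------------------------------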
00:05:00.178 17:15:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.178 17:15:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.178 17:15:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:00.179 INFO: JSON configuration test init 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.179 17:15:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:00.179 17:15:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:00.179 17:15:29 json_config -- json_config/common.sh@10 -- # shift 00:05:00.179 17:15:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.179 17:15:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.179 17:15:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.179 17:15:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.179 17:15:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.179 17:15:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2386954 00:05:00.179 17:15:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.179 Waiting for target to run... 00:05:00.179 17:15:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2386954 /var/tmp/spdk_tgt.sock 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 2386954 ']' 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.179 17:15:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.179 17:15:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.179 [2024-12-09 17:15:29.304439] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:00.179 [2024-12-09 17:15:29.304492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386954 ] 00:05:00.437 [2024-12-09 17:15:29.584973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.695 [2024-12-09 17:15:29.617320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.953 17:15:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.953 17:15:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:00.953 17:15:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:00.953 00:05:00.953 17:15:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:01.211 17:15:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:01.211 17:15:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.211 17:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.211 17:15:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:01.211 17:15:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:01.211 17:15:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.211 17:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.211 17:15:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:01.211 17:15:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:01.211 17:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.497 17:15:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.497 17:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:04.497 17:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:04.497 17:15:33 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@54 -- # sort 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:04.497 17:15:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.497 17:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:04.497 17:15:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.497 17:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:04.497 17:15:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.497 17:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.755 MallocForNvmf0 00:05:04.755 17:15:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.755 17:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.755 MallocForNvmf1 00:05:04.755 17:15:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.755 17:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.013 [2024-12-09 17:15:34.077543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.013 17:15:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.013 17:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.271 17:15:34 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.271 17:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.530 17:15:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.530 17:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.530 17:15:34 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.530 17:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.873 [2024-12-09 17:15:34.827818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.873 17:15:34 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:05.873 17:15:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.873 17:15:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.873 17:15:34 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:05.873 17:15:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.873 17:15:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.873 17:15:34 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:05.873 17:15:34 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.873 17:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.198 MallocBdevForConfigChangeCheck 00:05:06.198 17:15:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:06.198 17:15:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.198 17:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.198 17:15:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:06.198 17:15:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.457 17:15:35 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:06.457 INFO: shutting down applications... 
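The NVMe-oF target being saved here was assembled by the RPC sequence traced above; MallocBdevForConfigChangeCheck is a deliberate marker bdev whose deletion later in the test is what makes a configuration change detectable. Condensed, and with scripts/rpc.py -s /var/tmp/spdk_tgt.sock abbreviated to an illustrative $rpc alias that the test itself does not define, the build-up was:

    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'   # illustrative alias, run from the spdk tree
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512-byte blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024-byte blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Each call appears verbatim in the trace; only the $rpc shorthand is added here.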
00:05:06.457 17:15:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:06.457 17:15:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:06.457 17:15:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:06.457 17:15:35 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.064 Calling clear_iscsi_subsystem 00:05:08.064 Calling clear_nvmf_subsystem 00:05:08.064 Calling clear_nbd_subsystem 00:05:08.064 Calling clear_ublk_subsystem 00:05:08.064 Calling clear_vhost_blk_subsystem 00:05:08.064 Calling clear_vhost_scsi_subsystem 00:05:08.064 Calling clear_bdev_subsystem 00:05:08.064 17:15:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:08.064 17:15:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:08.064 17:15:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:08.064 17:15:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.064 17:15:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:08.064 17:15:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.322 17:15:37 json_config -- json_config/json_config.sh@352 -- # break 00:05:08.322 17:15:37 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:08.322 17:15:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:08.322 17:15:37 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.322 17:15:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.322 17:15:37 json_config -- json_config/common.sh@35 -- # [[ -n 2386954 ]] 00:05:08.322 17:15:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2386954 00:05:08.322 17:15:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.322 17:15:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.322 17:15:37 json_config -- json_config/common.sh@41 -- # kill -0 2386954 00:05:08.322 17:15:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.890 17:15:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.890 17:15:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.890 17:15:37 json_config -- json_config/common.sh@41 -- # kill -0 2386954 00:05:08.890 17:15:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.890 17:15:37 json_config -- json_config/common.sh@43 -- # break 00:05:08.890 17:15:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.890 17:15:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.890 SPDK target shutdown done 00:05:08.890 17:15:37 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:08.890 INFO: relaunching applications... 
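The shutdown just logged is the standard pattern in json_config/common.sh: send SIGINT to the target, then poll with kill -0 (which checks process existence without delivering a signal) for up to 30 iterations of 0.5 s before declaring it done. A minimal standalone sketch of the same pattern, not the script's exact code:

    pid=2386954                                # target PID from the trace
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # process gone: stop waiting
        sleep 0.5
    done
    echo 'SPDK target shutdown done'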
00:05:08.890 17:15:37 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.890 17:15:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.890 17:15:37 json_config -- json_config/common.sh@10 -- # shift 00:05:08.890 17:15:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.890 17:15:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.890 17:15:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.890 17:15:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.890 17:15:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.890 17:15:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2388590 00:05:08.890 17:15:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.890 Waiting for target to run... 00:05:08.890 17:15:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.890 17:15:37 json_config -- json_config/common.sh@25 -- # waitforlisten 2388590 /var/tmp/spdk_tgt.sock 00:05:08.890 17:15:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 2388590 ']' 00:05:08.890 17:15:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.890 17:15:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.890 17:15:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.890 17:15:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.890 17:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.890 [2024-12-09 17:15:37.952478] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:08.890 [2024-12-09 17:15:37.952534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388590 ] 00:05:09.458 [2024-12-09 17:15:38.410677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.458 [2024-12-09 17:15:38.460566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.744 [2024-12-09 17:15:41.487102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.744 [2024-12-09 17:15:41.519349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.002 17:15:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.002 17:15:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:13.002 17:15:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.002 00:05:13.002 17:15:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:13.002 17:15:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.002 INFO: Checking if target configuration is the same... 
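The comparison announced here cannot be a plain diff of the live configuration against spdk_tgt_config.json, presumably because save_config output ordering need not be stable across runs; json_diff.sh first normalizes both sides with config_filter.py -method sort and only then diffs. Roughly, assuming the filter reads stdin and writes stdout (which the argument-free xtrace lines suggest) and with illustrative temp-file names where the real script uses mktemp:

    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'      # illustrative alias, as before
    sort_cfg=test/json_config/config_filter.py
    $rpc save_config | $sort_cfg -method sort > /tmp/live.json
    $sort_cfg -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'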
00:05:13.002 17:15:42 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.002 17:15:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:13.002 17:15:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.002 + '[' 2 -ne 2 ']' 00:05:13.002 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.261 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.261 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.261 +++ basename /dev/fd/62 00:05:13.261 ++ mktemp /tmp/62.XXX 00:05:13.261 + tmp_file_1=/tmp/62.DGb 00:05:13.261 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.261 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.261 + tmp_file_2=/tmp/spdk_tgt_config.json.b0U 00:05:13.261 + ret=0 00:05:13.261 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.519 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.519 + diff -u /tmp/62.DGb /tmp/spdk_tgt_config.json.b0U 00:05:13.519 + echo 'INFO: JSON config files are the same' 00:05:13.519 INFO: JSON config files are the same 00:05:13.519 + rm /tmp/62.DGb /tmp/spdk_tgt_config.json.b0U 00:05:13.519 + exit 0 00:05:13.519 17:15:42 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:13.519 17:15:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:13.519 INFO: changing configuration and checking if this can be detected... 00:05:13.519 17:15:42 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.519 17:15:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.778 17:15:42 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.778 17:15:42 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:13.778 17:15:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.778 + '[' 2 -ne 2 ']' 00:05:13.778 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.778 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:13.778 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.778 +++ basename /dev/fd/62 00:05:13.778 ++ mktemp /tmp/62.XXX 00:05:13.778 + tmp_file_1=/tmp/62.dR1 00:05:13.778 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.778 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.778 + tmp_file_2=/tmp/spdk_tgt_config.json.szw 00:05:13.778 + ret=0 00:05:13.778 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.036 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.036 + diff -u /tmp/62.dR1 /tmp/spdk_tgt_config.json.szw 00:05:14.036 + ret=1 00:05:14.036 + echo '=== Start of file: /tmp/62.dR1 ===' 00:05:14.036 + cat /tmp/62.dR1 00:05:14.036 + echo '=== End of file: /tmp/62.dR1 ===' 00:05:14.036 + echo '' 00:05:14.036 + echo '=== Start of file: /tmp/spdk_tgt_config.json.szw ===' 00:05:14.036 + cat /tmp/spdk_tgt_config.json.szw 00:05:14.036 + echo '=== End of file: /tmp/spdk_tgt_config.json.szw ===' 00:05:14.036 + echo '' 00:05:14.036 + rm /tmp/62.dR1 /tmp/spdk_tgt_config.json.szw 00:05:14.036 + exit 1 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:14.036 INFO: configuration change detected. 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:14.036 17:15:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.036 17:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 2388590 ]] 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.036 17:15:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.036 17:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:14.036 17:15:43 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:14.037 17:15:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:14.037 17:15:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:14.037 17:15:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:14.037 17:15:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.037 17:15:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.037 17:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.295 17:15:43 json_config -- json_config/json_config.sh@330 -- # killprocess 2388590 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@954 -- # '[' -z 2388590 ']' 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@958 -- # kill -0 2388590 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@959 -- # uname 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.296 17:15:43 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2388590 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2388590' 00:05:14.296 killing process with pid 2388590 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@973 -- # kill 2388590 00:05:14.296 17:15:43 json_config -- common/autotest_common.sh@978 -- # wait 2388590 00:05:15.674 17:15:44 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.674 17:15:44 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:15.674 17:15:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.674 17:15:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 17:15:44 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:15.674 17:15:44 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:15.674 INFO: Success 00:05:15.674 00:05:15.674 real 0m15.790s 00:05:15.674 user 0m16.326s 00:05:15.674 sys 0m2.598s 00:05:15.934 17:15:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.934 17:15:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.934 ************************************ 00:05:15.934 END TEST json_config 00:05:15.934 ************************************ 00:05:15.934 17:15:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.934 17:15:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.934 17:15:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.934 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.934 ************************************ 00:05:15.934 START TEST json_config_extra_key 00:05:15.934 ************************************ 00:05:15.934 17:15:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.934 17:15:44 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.934 17:15:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.934 17:15:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.934 17:15:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.934 17:15:45 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:15.934 17:15:45 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.934 17:15:45 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.934 --rc genhtml_branch_coverage=1 00:05:15.934 --rc genhtml_function_coverage=1 00:05:15.934 --rc genhtml_legend=1 00:05:15.934 --rc geninfo_all_blocks=1 00:05:15.934 --rc geninfo_unexecuted_blocks=1 00:05:15.934 00:05:15.934 ' 00:05:15.934 17:15:45 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.934 --rc genhtml_branch_coverage=1 00:05:15.934 --rc genhtml_function_coverage=1 00:05:15.934 --rc genhtml_legend=1 00:05:15.934 --rc geninfo_all_blocks=1 00:05:15.934 --rc geninfo_unexecuted_blocks=1 00:05:15.934 00:05:15.934 ' 00:05:15.934 17:15:45 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.934 --rc genhtml_branch_coverage=1 00:05:15.934 --rc genhtml_function_coverage=1 00:05:15.934 --rc genhtml_legend=1 00:05:15.934 --rc geninfo_all_blocks=1 00:05:15.934 --rc geninfo_unexecuted_blocks=1 00:05:15.934 00:05:15.934 ' 00:05:15.934 17:15:45 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.934 --rc genhtml_branch_coverage=1 00:05:15.934 --rc genhtml_function_coverage=1 00:05:15.934 --rc genhtml_legend=1 00:05:15.934 --rc geninfo_all_blocks=1 00:05:15.934 --rc geninfo_unexecuted_blocks=1 00:05:15.934 00:05:15.934 ' 00:05:15.934 17:15:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.934 17:15:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.934 17:15:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.934 17:15:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.934 17:15:45 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.934 17:15:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.934 17:15:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.934 17:15:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.934 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.935 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.935 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.935 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:15.935 INFO: launching applications... 
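The "[: : integer expression expected" complaint a few lines up is a stock test-command pitfall, not a test failure: line 33 of nvmf/common.sh expands an empty variable, leaving [ '' -eq 1 ], and -eq demands integer operands, so [ prints an error and returns nonzero instead of comparing. Generic illustrations of the failure and two common guards (shown for context, not as a patch to common.sh):

    x=''
    [ "$x" -eq 1 ]                    # errors: [: : integer expression expected (and returns nonzero)
    [ -n "$x" ] && [ "$x" -eq 1 ]     # guard the empty case first
    [ "${x:-0}" -eq 1 ]               # or substitute a numeric default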
00:05:15.935 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.935 17:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.194 17:15:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2389870 00:05:16.194 17:15:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.194 Waiting for target to run... 00:05:16.194 17:15:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2389870 /var/tmp/spdk_tgt.sock 00:05:16.194 17:15:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2389870 ']' 00:05:16.194 17:15:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:16.194 17:15:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.194 17:15:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.194 17:15:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.194 17:15:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.194 17:15:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.194 [2024-12-09 17:15:45.163859] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:16.194 [2024-12-09 17:15:45.163908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389870 ] 00:05:16.453 [2024-12-09 17:15:45.609317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.712 [2024-12-09 17:15:45.658694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.971 17:15:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.971 17:15:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:16.971 17:15:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:16.971 00:05:16.971 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:16.971 INFO: shutting down applications... 
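The launch that just completed relies on waitforlisten (max_retries=100, as echoed), which blocks until the target's UNIX-domain RPC socket answers. A behavioral stand-in, with the probe command and poll interval chosen for illustration rather than copied from autotest_common.sh:

    sock=/var/tmp/spdk_tgt.sock
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break   # socket is up once any RPC succeeds
        sleep 0.5                      # illustrative interval
    done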
00:05:16.971 17:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:16.971 17:15:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:16.971 17:15:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.971 17:15:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2389870 ]] 00:05:16.971 17:15:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2389870 00:05:16.971 17:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.971 17:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.971 17:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2389870 00:05:16.971 17:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2389870 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.540 17:15:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.540 SPDK target shutdown done 00:05:17.540 17:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.540 Success 00:05:17.540 00:05:17.540 real 0m1.587s 00:05:17.540 user 0m1.206s 00:05:17.540 sys 0m0.574s 00:05:17.540 17:15:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.540 17:15:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.540 ************************************ 00:05:17.540 END TEST json_config_extra_key 00:05:17.540 ************************************ 00:05:17.540 17:15:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.540 17:15:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.540 17:15:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.540 17:15:46 -- common/autotest_common.sh@10 -- # set +x 00:05:17.540 ************************************ 00:05:17.540 START TEST alias_rpc 00:05:17.540 ************************************ 00:05:17.540 17:15:46 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.540 * Looking for test storage... 
00:05:17.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:17.540 17:15:46 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.540 17:15:46 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.540 17:15:46 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.799 17:15:46 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.799 17:15:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.800 17:15:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.800 17:15:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.800 17:15:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.800 --rc genhtml_branch_coverage=1 00:05:17.800 --rc genhtml_function_coverage=1 00:05:17.800 --rc genhtml_legend=1 00:05:17.800 --rc geninfo_all_blocks=1 00:05:17.800 --rc geninfo_unexecuted_blocks=1 00:05:17.800 00:05:17.800 ' 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.800 --rc genhtml_branch_coverage=1 00:05:17.800 --rc genhtml_function_coverage=1 00:05:17.800 --rc genhtml_legend=1 00:05:17.800 --rc geninfo_all_blocks=1 00:05:17.800 --rc geninfo_unexecuted_blocks=1 00:05:17.800 00:05:17.800 ' 00:05:17.800 17:15:46 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.800 --rc genhtml_branch_coverage=1 00:05:17.800 --rc genhtml_function_coverage=1 00:05:17.800 --rc genhtml_legend=1 00:05:17.800 --rc geninfo_all_blocks=1 00:05:17.800 --rc geninfo_unexecuted_blocks=1 00:05:17.800 00:05:17.800 ' 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.800 --rc genhtml_branch_coverage=1 00:05:17.800 --rc genhtml_function_coverage=1 00:05:17.800 --rc genhtml_legend=1 00:05:17.800 --rc geninfo_all_blocks=1 00:05:17.800 --rc geninfo_unexecuted_blocks=1 00:05:17.800 00:05:17.800 ' 00:05:17.800 17:15:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.800 17:15:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2390163 00:05:17.800 17:15:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.800 17:15:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2390163 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2390163 ']' 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.800 17:15:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.800 [2024-12-09 17:15:46.804727] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:17.800 [2024-12-09 17:15:46.804778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390163 ] 00:05:17.800 [2024-12-09 17:15:46.879870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.800 [2024-12-09 17:15:46.918333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.059 17:15:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.059 17:15:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.059 17:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:18.318 17:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2390163 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2390163 ']' 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2390163 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390163 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390163' 00:05:18.318 killing process with pid 2390163 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 2390163 00:05:18.318 17:15:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 2390163 00:05:18.577 00:05:18.577 real 0m1.152s 00:05:18.577 user 0m1.180s 00:05:18.577 sys 0m0.414s 00:05:18.577 17:15:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.577 17:15:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.577 ************************************ 00:05:18.577 END TEST alias_rpc 00:05:18.577 ************************************ 00:05:18.837 17:15:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:18.837 17:15:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.837 17:15:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.837 17:15:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.837 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.837 ************************************ 00:05:18.837 START TEST spdkcli_tcp 00:05:18.837 ************************************ 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.837 * Looking for test storage... 
00:05:18.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.837 17:15:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.837 --rc genhtml_branch_coverage=1 00:05:18.837 --rc genhtml_function_coverage=1 00:05:18.837 --rc genhtml_legend=1 00:05:18.837 --rc geninfo_all_blocks=1 00:05:18.837 --rc geninfo_unexecuted_blocks=1 00:05:18.837 00:05:18.837 ' 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.837 --rc genhtml_branch_coverage=1 00:05:18.837 --rc genhtml_function_coverage=1 00:05:18.837 --rc genhtml_legend=1 00:05:18.837 --rc geninfo_all_blocks=1 00:05:18.837 --rc 
geninfo_unexecuted_blocks=1 00:05:18.837 00:05:18.837 ' 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.837 --rc genhtml_branch_coverage=1 00:05:18.837 --rc genhtml_function_coverage=1 00:05:18.837 --rc genhtml_legend=1 00:05:18.837 --rc geninfo_all_blocks=1 00:05:18.837 --rc geninfo_unexecuted_blocks=1 00:05:18.837 00:05:18.837 ' 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.837 --rc genhtml_branch_coverage=1 00:05:18.837 --rc genhtml_function_coverage=1 00:05:18.837 --rc genhtml_legend=1 00:05:18.837 --rc geninfo_all_blocks=1 00:05:18.837 --rc geninfo_unexecuted_blocks=1 00:05:18.837 00:05:18.837 ' 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2390450 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.837 17:15:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2390450 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2390450 ']' 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.837 17:15:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.096 [2024-12-09 17:15:48.024605] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:19.097 [2024-12-09 17:15:48.024653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390450 ] 00:05:19.097 [2024-12-09 17:15:48.082842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.097 [2024-12-09 17:15:48.126485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.097 [2024-12-09 17:15:48.126495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.356 17:15:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.356 17:15:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:19.356 17:15:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2390479 00:05:19.356 17:15:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.356 17:15:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.356 [ 00:05:19.356 "bdev_malloc_delete", 00:05:19.356 "bdev_malloc_create", 00:05:19.356 "bdev_null_resize", 00:05:19.356 "bdev_null_delete", 00:05:19.356 "bdev_null_create", 00:05:19.356 "bdev_nvme_cuse_unregister", 00:05:19.356 "bdev_nvme_cuse_register", 00:05:19.356 "bdev_opal_new_user", 00:05:19.356 "bdev_opal_set_lock_state", 00:05:19.356 "bdev_opal_delete", 00:05:19.356 "bdev_opal_get_info", 00:05:19.356 "bdev_opal_create", 00:05:19.356 "bdev_nvme_opal_revert", 00:05:19.356 "bdev_nvme_opal_init", 00:05:19.356 "bdev_nvme_send_cmd", 00:05:19.356 "bdev_nvme_set_keys", 00:05:19.356 "bdev_nvme_get_path_iostat", 00:05:19.356 "bdev_nvme_get_mdns_discovery_info", 00:05:19.356 "bdev_nvme_stop_mdns_discovery", 00:05:19.356 "bdev_nvme_start_mdns_discovery", 00:05:19.356 "bdev_nvme_set_multipath_policy", 00:05:19.356 "bdev_nvme_set_preferred_path", 00:05:19.356 "bdev_nvme_get_io_paths", 00:05:19.356 "bdev_nvme_remove_error_injection", 00:05:19.356 "bdev_nvme_add_error_injection", 00:05:19.356 "bdev_nvme_get_discovery_info", 00:05:19.356 "bdev_nvme_stop_discovery", 00:05:19.356 "bdev_nvme_start_discovery", 00:05:19.356 "bdev_nvme_get_controller_health_info", 00:05:19.356 "bdev_nvme_disable_controller", 00:05:19.356 "bdev_nvme_enable_controller", 00:05:19.357 "bdev_nvme_reset_controller", 00:05:19.357 "bdev_nvme_get_transport_statistics", 00:05:19.357 "bdev_nvme_apply_firmware", 00:05:19.357 "bdev_nvme_detach_controller", 00:05:19.357 "bdev_nvme_get_controllers", 00:05:19.357 "bdev_nvme_attach_controller", 00:05:19.357 "bdev_nvme_set_hotplug", 00:05:19.357 "bdev_nvme_set_options", 00:05:19.357 "bdev_passthru_delete", 00:05:19.357 "bdev_passthru_create", 00:05:19.357 "bdev_lvol_set_parent_bdev", 00:05:19.357 "bdev_lvol_set_parent", 00:05:19.357 "bdev_lvol_check_shallow_copy", 00:05:19.357 "bdev_lvol_start_shallow_copy", 00:05:19.357 "bdev_lvol_grow_lvstore", 00:05:19.357 "bdev_lvol_get_lvols", 00:05:19.357 "bdev_lvol_get_lvstores", 00:05:19.357 "bdev_lvol_delete", 00:05:19.357 "bdev_lvol_set_read_only", 00:05:19.357 "bdev_lvol_resize", 00:05:19.357 "bdev_lvol_decouple_parent", 00:05:19.357 "bdev_lvol_inflate", 00:05:19.357 "bdev_lvol_rename", 00:05:19.357 "bdev_lvol_clone_bdev", 00:05:19.357 "bdev_lvol_clone", 00:05:19.357 "bdev_lvol_snapshot", 00:05:19.357 "bdev_lvol_create", 00:05:19.357 "bdev_lvol_delete_lvstore", 00:05:19.357 "bdev_lvol_rename_lvstore", 
00:05:19.357 "bdev_lvol_create_lvstore", 00:05:19.357 "bdev_raid_set_options", 00:05:19.357 "bdev_raid_remove_base_bdev", 00:05:19.357 "bdev_raid_add_base_bdev", 00:05:19.357 "bdev_raid_delete", 00:05:19.357 "bdev_raid_create", 00:05:19.357 "bdev_raid_get_bdevs", 00:05:19.357 "bdev_error_inject_error", 00:05:19.357 "bdev_error_delete", 00:05:19.357 "bdev_error_create", 00:05:19.357 "bdev_split_delete", 00:05:19.357 "bdev_split_create", 00:05:19.357 "bdev_delay_delete", 00:05:19.357 "bdev_delay_create", 00:05:19.357 "bdev_delay_update_latency", 00:05:19.357 "bdev_zone_block_delete", 00:05:19.357 "bdev_zone_block_create", 00:05:19.357 "blobfs_create", 00:05:19.357 "blobfs_detect", 00:05:19.357 "blobfs_set_cache_size", 00:05:19.357 "bdev_aio_delete", 00:05:19.357 "bdev_aio_rescan", 00:05:19.357 "bdev_aio_create", 00:05:19.357 "bdev_ftl_set_property", 00:05:19.357 "bdev_ftl_get_properties", 00:05:19.357 "bdev_ftl_get_stats", 00:05:19.357 "bdev_ftl_unmap", 00:05:19.357 "bdev_ftl_unload", 00:05:19.357 "bdev_ftl_delete", 00:05:19.357 "bdev_ftl_load", 00:05:19.357 "bdev_ftl_create", 00:05:19.357 "bdev_virtio_attach_controller", 00:05:19.357 "bdev_virtio_scsi_get_devices", 00:05:19.357 "bdev_virtio_detach_controller", 00:05:19.357 "bdev_virtio_blk_set_hotplug", 00:05:19.357 "bdev_iscsi_delete", 00:05:19.357 "bdev_iscsi_create", 00:05:19.357 "bdev_iscsi_set_options", 00:05:19.357 "accel_error_inject_error", 00:05:19.357 "ioat_scan_accel_module", 00:05:19.357 "dsa_scan_accel_module", 00:05:19.357 "iaa_scan_accel_module", 00:05:19.357 "vfu_virtio_create_fs_endpoint", 00:05:19.357 "vfu_virtio_create_scsi_endpoint", 00:05:19.357 "vfu_virtio_scsi_remove_target", 00:05:19.357 "vfu_virtio_scsi_add_target", 00:05:19.357 "vfu_virtio_create_blk_endpoint", 00:05:19.357 "vfu_virtio_delete_endpoint", 00:05:19.357 "keyring_file_remove_key", 00:05:19.357 "keyring_file_add_key", 00:05:19.357 "keyring_linux_set_options", 00:05:19.357 "fsdev_aio_delete", 00:05:19.357 "fsdev_aio_create", 00:05:19.357 "iscsi_get_histogram", 00:05:19.357 "iscsi_enable_histogram", 00:05:19.357 "iscsi_set_options", 00:05:19.357 "iscsi_get_auth_groups", 00:05:19.357 "iscsi_auth_group_remove_secret", 00:05:19.357 "iscsi_auth_group_add_secret", 00:05:19.357 "iscsi_delete_auth_group", 00:05:19.357 "iscsi_create_auth_group", 00:05:19.357 "iscsi_set_discovery_auth", 00:05:19.357 "iscsi_get_options", 00:05:19.357 "iscsi_target_node_request_logout", 00:05:19.357 "iscsi_target_node_set_redirect", 00:05:19.357 "iscsi_target_node_set_auth", 00:05:19.357 "iscsi_target_node_add_lun", 00:05:19.357 "iscsi_get_stats", 00:05:19.357 "iscsi_get_connections", 00:05:19.357 "iscsi_portal_group_set_auth", 00:05:19.357 "iscsi_start_portal_group", 00:05:19.357 "iscsi_delete_portal_group", 00:05:19.357 "iscsi_create_portal_group", 00:05:19.357 "iscsi_get_portal_groups", 00:05:19.357 "iscsi_delete_target_node", 00:05:19.357 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.357 "iscsi_target_node_add_pg_ig_maps", 00:05:19.357 "iscsi_create_target_node", 00:05:19.357 "iscsi_get_target_nodes", 00:05:19.357 "iscsi_delete_initiator_group", 00:05:19.357 "iscsi_initiator_group_remove_initiators", 00:05:19.357 "iscsi_initiator_group_add_initiators", 00:05:19.357 "iscsi_create_initiator_group", 00:05:19.357 "iscsi_get_initiator_groups", 00:05:19.357 "nvmf_set_crdt", 00:05:19.357 "nvmf_set_config", 00:05:19.357 "nvmf_set_max_subsystems", 00:05:19.357 "nvmf_stop_mdns_prr", 00:05:19.357 "nvmf_publish_mdns_prr", 00:05:19.357 "nvmf_subsystem_get_listeners", 00:05:19.357 
"nvmf_subsystem_get_qpairs", 00:05:19.357 "nvmf_subsystem_get_controllers", 00:05:19.357 "nvmf_get_stats", 00:05:19.357 "nvmf_get_transports", 00:05:19.357 "nvmf_create_transport", 00:05:19.357 "nvmf_get_targets", 00:05:19.357 "nvmf_delete_target", 00:05:19.357 "nvmf_create_target", 00:05:19.357 "nvmf_subsystem_allow_any_host", 00:05:19.357 "nvmf_subsystem_set_keys", 00:05:19.357 "nvmf_subsystem_remove_host", 00:05:19.357 "nvmf_subsystem_add_host", 00:05:19.357 "nvmf_ns_remove_host", 00:05:19.357 "nvmf_ns_add_host", 00:05:19.357 "nvmf_subsystem_remove_ns", 00:05:19.357 "nvmf_subsystem_set_ns_ana_group", 00:05:19.357 "nvmf_subsystem_add_ns", 00:05:19.357 "nvmf_subsystem_listener_set_ana_state", 00:05:19.357 "nvmf_discovery_get_referrals", 00:05:19.357 "nvmf_discovery_remove_referral", 00:05:19.357 "nvmf_discovery_add_referral", 00:05:19.357 "nvmf_subsystem_remove_listener", 00:05:19.357 "nvmf_subsystem_add_listener", 00:05:19.357 "nvmf_delete_subsystem", 00:05:19.357 "nvmf_create_subsystem", 00:05:19.357 "nvmf_get_subsystems", 00:05:19.357 "env_dpdk_get_mem_stats", 00:05:19.357 "nbd_get_disks", 00:05:19.357 "nbd_stop_disk", 00:05:19.357 "nbd_start_disk", 00:05:19.358 "ublk_recover_disk", 00:05:19.358 "ublk_get_disks", 00:05:19.358 "ublk_stop_disk", 00:05:19.358 "ublk_start_disk", 00:05:19.358 "ublk_destroy_target", 00:05:19.358 "ublk_create_target", 00:05:19.358 "virtio_blk_create_transport", 00:05:19.358 "virtio_blk_get_transports", 00:05:19.358 "vhost_controller_set_coalescing", 00:05:19.358 "vhost_get_controllers", 00:05:19.358 "vhost_delete_controller", 00:05:19.358 "vhost_create_blk_controller", 00:05:19.358 "vhost_scsi_controller_remove_target", 00:05:19.358 "vhost_scsi_controller_add_target", 00:05:19.358 "vhost_start_scsi_controller", 00:05:19.358 "vhost_create_scsi_controller", 00:05:19.358 "thread_set_cpumask", 00:05:19.358 "scheduler_set_options", 00:05:19.358 "framework_get_governor", 00:05:19.358 "framework_get_scheduler", 00:05:19.358 "framework_set_scheduler", 00:05:19.358 "framework_get_reactors", 00:05:19.358 "thread_get_io_channels", 00:05:19.358 "thread_get_pollers", 00:05:19.358 "thread_get_stats", 00:05:19.358 "framework_monitor_context_switch", 00:05:19.358 "spdk_kill_instance", 00:05:19.358 "log_enable_timestamps", 00:05:19.358 "log_get_flags", 00:05:19.358 "log_clear_flag", 00:05:19.358 "log_set_flag", 00:05:19.358 "log_get_level", 00:05:19.358 "log_set_level", 00:05:19.358 "log_get_print_level", 00:05:19.358 "log_set_print_level", 00:05:19.358 "framework_enable_cpumask_locks", 00:05:19.358 "framework_disable_cpumask_locks", 00:05:19.358 "framework_wait_init", 00:05:19.358 "framework_start_init", 00:05:19.358 "scsi_get_devices", 00:05:19.358 "bdev_get_histogram", 00:05:19.358 "bdev_enable_histogram", 00:05:19.358 "bdev_set_qos_limit", 00:05:19.358 "bdev_set_qd_sampling_period", 00:05:19.358 "bdev_get_bdevs", 00:05:19.358 "bdev_reset_iostat", 00:05:19.358 "bdev_get_iostat", 00:05:19.358 "bdev_examine", 00:05:19.358 "bdev_wait_for_examine", 00:05:19.358 "bdev_set_options", 00:05:19.358 "accel_get_stats", 00:05:19.358 "accel_set_options", 00:05:19.358 "accel_set_driver", 00:05:19.358 "accel_crypto_key_destroy", 00:05:19.358 "accel_crypto_keys_get", 00:05:19.358 "accel_crypto_key_create", 00:05:19.358 "accel_assign_opc", 00:05:19.358 "accel_get_module_info", 00:05:19.358 "accel_get_opc_assignments", 00:05:19.358 "vmd_rescan", 00:05:19.358 "vmd_remove_device", 00:05:19.358 "vmd_enable", 00:05:19.358 "sock_get_default_impl", 00:05:19.358 "sock_set_default_impl", 
00:05:19.358 "sock_impl_set_options", 00:05:19.358 "sock_impl_get_options", 00:05:19.358 "iobuf_get_stats", 00:05:19.358 "iobuf_set_options", 00:05:19.358 "keyring_get_keys", 00:05:19.358 "vfu_tgt_set_base_path", 00:05:19.358 "framework_get_pci_devices", 00:05:19.358 "framework_get_config", 00:05:19.358 "framework_get_subsystems", 00:05:19.358 "fsdev_set_opts", 00:05:19.358 "fsdev_get_opts", 00:05:19.358 "trace_get_info", 00:05:19.358 "trace_get_tpoint_group_mask", 00:05:19.358 "trace_disable_tpoint_group", 00:05:19.358 "trace_enable_tpoint_group", 00:05:19.358 "trace_clear_tpoint_mask", 00:05:19.358 "trace_set_tpoint_mask", 00:05:19.358 "notify_get_notifications", 00:05:19.358 "notify_get_types", 00:05:19.358 "spdk_get_version", 00:05:19.358 "rpc_get_methods" 00:05:19.358 ] 00:05:19.617 17:15:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:19.617 17:15:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.617 17:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.617 17:15:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:19.617 17:15:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2390450 00:05:19.617 17:15:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2390450 ']' 00:05:19.617 17:15:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2390450 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390450 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390450' 00:05:19.618 killing process with pid 2390450 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2390450 00:05:19.618 17:15:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2390450 00:05:19.877 00:05:19.877 real 0m1.133s 00:05:19.877 user 0m1.964s 00:05:19.877 sys 0m0.440s 00:05:19.877 17:15:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.877 17:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.877 ************************************ 00:05:19.877 END TEST spdkcli_tcp 00:05:19.877 ************************************ 00:05:19.877 17:15:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.877 17:15:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.877 17:15:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.877 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:05:19.877 ************************************ 00:05:19.877 START TEST dpdk_mem_utility 00:05:19.877 ************************************ 00:05:19.877 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.137 * Looking for test storage... 
00:05:20.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.137 17:15:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.137 --rc genhtml_branch_coverage=1 00:05:20.137 --rc genhtml_function_coverage=1 00:05:20.137 --rc genhtml_legend=1 00:05:20.137 --rc geninfo_all_blocks=1 00:05:20.137 --rc geninfo_unexecuted_blocks=1 00:05:20.137 00:05:20.137 ' 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.137 --rc 
genhtml_branch_coverage=1 00:05:20.137 --rc genhtml_function_coverage=1 00:05:20.137 --rc genhtml_legend=1 00:05:20.137 --rc geninfo_all_blocks=1 00:05:20.137 --rc geninfo_unexecuted_blocks=1 00:05:20.137 00:05:20.137 ' 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.137 --rc genhtml_branch_coverage=1 00:05:20.137 --rc genhtml_function_coverage=1 00:05:20.137 --rc genhtml_legend=1 00:05:20.137 --rc geninfo_all_blocks=1 00:05:20.137 --rc geninfo_unexecuted_blocks=1 00:05:20.137 00:05:20.137 ' 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.137 --rc genhtml_branch_coverage=1 00:05:20.137 --rc genhtml_function_coverage=1 00:05:20.137 --rc genhtml_legend=1 00:05:20.137 --rc geninfo_all_blocks=1 00:05:20.137 --rc geninfo_unexecuted_blocks=1 00:05:20.137 00:05:20.137 ' 00:05:20.137 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.137 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2390750 00:05:20.137 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2390750 00:05:20.137 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2390750 ']' 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.137 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.137 [2024-12-09 17:15:49.229295] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:20.137 [2024-12-09 17:15:49.229343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390750 ] 00:05:20.137 [2024-12-09 17:15:49.304504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.397 [2024-12-09 17:15:49.345912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.397 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.397 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:20.397 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.397 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.397 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.397 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.397 { 00:05:20.397 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.397 } 00:05:20.397 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.397 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.657 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:20.657 1 heaps totaling size 818.000000 MiB 00:05:20.657 size: 818.000000 MiB heap id: 0 00:05:20.657 end heaps---------- 00:05:20.657 9 mempools totaling size 603.782043 MiB 00:05:20.657 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.657 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.657 size: 100.555481 MiB name: bdev_io_2390750 00:05:20.657 size: 50.003479 MiB name: msgpool_2390750 00:05:20.657 size: 36.509338 MiB name: fsdev_io_2390750 00:05:20.657 size: 21.763794 MiB name: PDU_Pool 00:05:20.657 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.657 size: 4.133484 MiB name: evtpool_2390750 00:05:20.657 size: 0.026123 MiB name: Session_Pool 00:05:20.657 end mempools------- 00:05:20.657 6 memzones totaling size 4.142822 MiB 00:05:20.657 size: 1.000366 MiB name: RG_ring_0_2390750 00:05:20.657 size: 1.000366 MiB name: RG_ring_1_2390750 00:05:20.657 size: 1.000366 MiB name: RG_ring_4_2390750 00:05:20.657 size: 1.000366 MiB name: RG_ring_5_2390750 00:05:20.657 size: 0.125366 MiB name: RG_ring_2_2390750 00:05:20.657 size: 0.015991 MiB name: RG_ring_3_2390750 00:05:20.657 end memzones------- 00:05:20.657 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.657 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:20.657 list of free elements. 
size: 10.852478 MiB 00:05:20.657 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:20.657 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:20.657 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:20.657 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:20.657 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:20.657 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:20.657 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:20.657 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:20.657 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:20.657 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:20.657 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:20.657 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:20.657 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:20.657 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:20.657 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:20.657 list of standard malloc elements. size: 199.218628 MiB 00:05:20.657 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:20.657 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:20.657 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:20.657 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:20.657 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:20.657 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:20.657 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:20.657 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.657 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:20.657 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:20.657 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:20.657 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:20.657 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:20.657 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:20.657 list of memzone associated elements. size: 607.928894 MiB 00:05:20.657 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:20.657 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.657 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:20.657 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.657 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:20.657 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2390750_0 00:05:20.657 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:20.657 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2390750_0 00:05:20.657 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:20.657 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2390750_0 00:05:20.657 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:20.657 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.657 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:20.657 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.657 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:20.657 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2390750_0 00:05:20.657 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:20.657 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2390750 00:05:20.657 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:20.657 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2390750 00:05:20.657 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:20.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.657 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:20.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.657 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:20.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.657 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:20.657 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.657 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:20.657 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2390750 00:05:20.657 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:20.657 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2390750 00:05:20.657 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:20.657 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2390750 00:05:20.657 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:20.657 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2390750 00:05:20.657 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:20.657 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2390750 00:05:20.657 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:20.657 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2390750 00:05:20.657 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:20.657 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.657 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:20.657 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.657 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:20.657 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.657 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:20.657 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2390750 00:05:20.657 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:20.657 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2390750 00:05:20.657 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:20.657 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.657 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:20.657 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.657 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:20.657 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2390750 00:05:20.657 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:20.658 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.658 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:20.658 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2390750 00:05:20.658 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:20.658 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2390750 00:05:20.658 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:20.658 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2390750 00:05:20.658 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:20.658 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.658 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.658 17:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2390750 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2390750 ']' 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2390750 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390750 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390750' 00:05:20.658 killing process with pid 2390750 00:05:20.658 17:15:49 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2390750 00:05:20.658 17:15:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2390750 00:05:20.917 00:05:20.917 real 0m1.011s 00:05:20.917 user 0m0.939s 00:05:20.917 sys 0m0.412s 00:05:20.917 17:15:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.917 17:15:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.917 ************************************ 00:05:20.917 END TEST dpdk_mem_utility 00:05:20.917 ************************************ 00:05:20.917 17:15:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.917 17:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.917 17:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.917 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.917 ************************************ 00:05:20.917 START TEST event 00:05:20.917 ************************************ 00:05:20.917 17:15:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.177 * Looking for test storage... 00:05:21.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.177 17:15:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.177 17:15:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.177 17:15:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.177 17:15:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.177 17:15:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.177 17:15:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.177 17:15:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.177 17:15:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.177 17:15:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.177 17:15:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.177 17:15:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.177 17:15:50 event -- scripts/common.sh@344 -- # case "$op" in 00:05:21.177 17:15:50 event -- scripts/common.sh@345 -- # : 1 00:05:21.177 17:15:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.177 17:15:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.177 17:15:50 event -- scripts/common.sh@365 -- # decimal 1 00:05:21.177 17:15:50 event -- scripts/common.sh@353 -- # local d=1 00:05:21.177 17:15:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.177 17:15:50 event -- scripts/common.sh@355 -- # echo 1 00:05:21.177 17:15:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.177 17:15:50 event -- scripts/common.sh@366 -- # decimal 2 00:05:21.177 17:15:50 event -- scripts/common.sh@353 -- # local d=2 00:05:21.177 17:15:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.177 17:15:50 event -- scripts/common.sh@355 -- # echo 2 00:05:21.177 17:15:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.177 17:15:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.177 17:15:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.177 17:15:50 event -- scripts/common.sh@368 -- # return 0 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 17:15:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:21.177 17:15:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.177 17:15:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:21.177 17:15:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.177 17:15:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.177 ************************************ 00:05:21.177 START TEST event_perf 00:05:21.177 ************************************ 00:05:21.177 17:15:50 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:21.177 Running I/O for 1 seconds...[2024-12-09 17:15:50.316203] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:21.177 [2024-12-09 17:15:50.316278] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391041 ] 00:05:21.436 [2024-12-09 17:15:50.395509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.436 [2024-12-09 17:15:50.437519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.436 [2024-12-09 17:15:50.437628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.436 [2024-12-09 17:15:50.437734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.436 [2024-12-09 17:15:50.437735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.373 Running I/O for 1 seconds... 00:05:22.373 lcore 0: 207774 00:05:22.373 lcore 1: 207772 00:05:22.373 lcore 2: 207774 00:05:22.373 lcore 3: 207774 00:05:22.373 done. 00:05:22.373 00:05:22.373 real 0m1.181s 00:05:22.373 user 0m4.099s 00:05:22.373 sys 0m0.078s 00:05:22.373 17:15:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.373 17:15:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.373 ************************************ 00:05:22.373 END TEST event_perf 00:05:22.373 ************************************ 00:05:22.373 17:15:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.373 17:15:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.373 17:15:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.373 17:15:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.373 ************************************ 00:05:22.373 START TEST event_reactor 00:05:22.373 ************************************ 00:05:22.373 17:15:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.632 [2024-12-09 17:15:51.569867] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:22.632 [2024-12-09 17:15:51.569939] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391287 ] 00:05:22.632 [2024-12-09 17:15:51.650057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.632 [2024-12-09 17:15:51.689234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.569 test_start 00:05:23.569 oneshot 00:05:23.569 tick 100 00:05:23.569 tick 100 00:05:23.569 tick 250 00:05:23.569 tick 100 00:05:23.569 tick 100 00:05:23.569 tick 100 00:05:23.569 tick 250 00:05:23.569 tick 500 00:05:23.569 tick 100 00:05:23.569 tick 100 00:05:23.569 tick 250 00:05:23.569 tick 100 00:05:23.569 tick 100 00:05:23.569 test_end 00:05:23.569 00:05:23.569 real 0m1.177s 00:05:23.569 user 0m1.098s 00:05:23.569 sys 0m0.075s 00:05:23.569 17:15:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.569 17:15:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.569 ************************************ 00:05:23.569 END TEST event_reactor 00:05:23.569 ************************************ 00:05:23.828 17:15:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.828 17:15:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:23.828 17:15:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.828 17:15:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.828 ************************************ 00:05:23.828 START TEST event_reactor_perf 00:05:23.828 ************************************ 00:05:23.828 17:15:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.828 [2024-12-09 17:15:52.819234] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:23.828 [2024-12-09 17:15:52.819295] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391538 ] 00:05:23.828 [2024-12-09 17:15:52.896677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.828 [2024-12-09 17:15:52.934995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.206 test_start 00:05:25.206 test_end 00:05:25.206 Performance: 522389 events per second 00:05:25.206 00:05:25.206 real 0m1.175s 00:05:25.206 user 0m1.099s 00:05:25.206 sys 0m0.073s 00:05:25.206 17:15:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.206 17:15:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.206 ************************************ 00:05:25.206 END TEST event_reactor_perf 00:05:25.206 ************************************ 00:05:25.206 17:15:54 event -- event/event.sh@49 -- # uname -s 00:05:25.206 17:15:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.206 17:15:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.206 17:15:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.206 17:15:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.206 17:15:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.206 ************************************ 00:05:25.206 START TEST event_scheduler 00:05:25.206 ************************************ 00:05:25.206 17:15:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.206 * Looking for test storage... 
00:05:25.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.207 17:15:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.207 --rc genhtml_branch_coverage=1 00:05:25.207 --rc genhtml_function_coverage=1 00:05:25.207 --rc genhtml_legend=1 00:05:25.207 --rc geninfo_all_blocks=1 00:05:25.207 --rc geninfo_unexecuted_blocks=1 00:05:25.207 00:05:25.207 ' 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.207 --rc genhtml_branch_coverage=1 00:05:25.207 --rc genhtml_function_coverage=1 00:05:25.207 --rc genhtml_legend=1 00:05:25.207 --rc geninfo_all_blocks=1 00:05:25.207 --rc geninfo_unexecuted_blocks=1 00:05:25.207 00:05:25.207 ' 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.207 --rc genhtml_branch_coverage=1 00:05:25.207 --rc genhtml_function_coverage=1 00:05:25.207 --rc genhtml_legend=1 00:05:25.207 --rc geninfo_all_blocks=1 00:05:25.207 --rc geninfo_unexecuted_blocks=1 00:05:25.207 00:05:25.207 ' 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.207 --rc genhtml_branch_coverage=1 00:05:25.207 --rc genhtml_function_coverage=1 00:05:25.207 --rc genhtml_legend=1 00:05:25.207 --rc geninfo_all_blocks=1 00:05:25.207 --rc geninfo_unexecuted_blocks=1 00:05:25.207 00:05:25.207 ' 00:05:25.207 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.207 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2391815 00:05:25.207 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.207 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.207 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2391815 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2391815 ']' 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.207 17:15:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.207 [2024-12-09 17:15:54.275990] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:25.207 [2024-12-09 17:15:54.276038] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391815 ] 00:05:25.207 [2024-12-09 17:15:54.347353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.467 [2024-12-09 17:15:54.391654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.467 [2024-12-09 17:15:54.391698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.467 [2024-12-09 17:15:54.391721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.467 [2024-12-09 17:15:54.391722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:25.467 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.467 [2024-12-09 17:15:54.444454] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:25.467 [2024-12-09 17:15:54.444471] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.467 [2024-12-09 17:15:54.444481] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.467 [2024-12-09 17:15:54.444487] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.467 [2024-12-09 17:15:54.444492] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.467 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.467 [2024-12-09 17:15:54.521919] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
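For reference, the dynamic-scheduler bring-up traced above reduces to three commands. A minimal sketch, assuming the same workspace path and that the default RPC socket (/var/tmp/spdk.sock) is free; the flags and RPC names are taken verbatim from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start the test app idle (--wait-for-rpc) so a scheduler can be chosen before init
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
$SPDK/scripts/rpc.py framework_set_scheduler dynamic   # the dpdk governor ERROR above is non-fatal
$SPDK/scripts/rpc.py framework_start_init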
00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 ************************************
00:05:25.467 START TEST scheduler_create_thread
00:05:25.467 ************************************
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 2
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 3
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 4
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 5
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 6
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 7
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 8
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.467 9
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.467 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.726 10
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.726 17:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:26.667 17:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.667 17:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:26.667 17:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.667 17:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.044 17:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.044 17:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:28.044 17:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:28.044 17:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.044 17:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.982 17:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.982 
00:05:28.982 real 0m3.383s
00:05:28.982 user 0m0.028s
00:05:28.982 sys 0m0.002s
00:05:28.982 17:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.982 17:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.982 ************************************
00:05:28.982 END TEST scheduler_create_thread
00:05:28.982 ************************************
00:05:28.982 17:15:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:28.982 17:15:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2391815
00:05:28.982 17:15:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2391815 ']'
00:05:28.982 17:15:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2391815
00:05:28.982 17:15:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:28.982 17:15:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:28.982 17:15:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391815
00:05:28.982 17:15:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:28.982 17:15:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:28.982 17:15:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391815'
00:05:28.982 killing process with pid 2391815
00:05:28.982 17:15:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2391815
00:05:28.982 17:15:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2391815
00:05:29.242 [2024-12-09 17:15:58.321848] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
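
Note: the scheduler_create_thread test above drives SPDK entirely over its JSON-RPC socket; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py. A minimal sketch of the same sequence issued by hand, assuming the scheduler test app is already running and that scheduler_plugin.py is reachable on PYTHONPATH (both are assumptions; the RPC names and arguments are taken verbatim from the trace):

  # create pinned threads: -n name, -m cpumask, -a expected busy load (0-100)
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # scheduler_thread_create prints the new thread id (11 and 12 in the trace above)
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # retune thread 11 to 50% load
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread 12 again
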
00:05:29.501 00:05:29.501 real 0m4.473s 00:05:29.501 user 0m7.863s 00:05:29.501 sys 0m0.363s 00:05:29.501 17:15:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.501 17:15:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.501 ************************************ 00:05:29.501 END TEST event_scheduler 00:05:29.501 ************************************ 00:05:29.501 17:15:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.501 17:15:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.501 17:15:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.501 17:15:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.501 17:15:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.501 ************************************ 00:05:29.501 START TEST app_repeat 00:05:29.501 ************************************ 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2392553 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2392553' 00:05:29.501 Process app_repeat pid: 2392553 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.501 spdk_app_start Round 0 00:05:29.501 17:15:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2392553 /var/tmp/spdk-nbd.sock 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2392553 ']' 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.501 17:15:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.501 [2024-12-09 17:15:58.640466] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:29.501 [2024-12-09 17:15:58.640524] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392553 ] 00:05:29.761 [2024-12-09 17:15:58.701022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.761 [2024-12-09 17:15:58.740811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.761 [2024-12-09 17:15:58.740812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.761 17:15:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.761 17:15:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.761 17:15:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.020 Malloc0 00:05:30.020 17:15:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.278 Malloc1 00:05:30.278 17:15:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.278 17:15:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.278 /dev/nbd0 00:05:30.537 17:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.537 17:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.537 1+0 records in 00:05:30.537 1+0 records out 00:05:30.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019015 s, 21.5 MB/s 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.537 17:15:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.537 17:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.537 17:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.537 17:15:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.537 /dev/nbd1 00:05:30.537 17:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.796 1+0 records in 00:05:30.796 1+0 records out 00:05:30.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222219 s, 18.4 MB/s 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.796 17:15:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.796 17:15:59 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.796 { 00:05:30.796 "nbd_device": "/dev/nbd0", 00:05:30.796 "bdev_name": "Malloc0" 00:05:30.796 }, 00:05:30.796 { 00:05:30.796 "nbd_device": "/dev/nbd1", 00:05:30.796 "bdev_name": "Malloc1" 00:05:30.796 } 00:05:30.796 ]' 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.796 { 00:05:30.796 "nbd_device": "/dev/nbd0", 00:05:30.796 "bdev_name": "Malloc0" 00:05:30.796 }, 00:05:30.796 { 00:05:30.796 "nbd_device": "/dev/nbd1", 00:05:30.796 "bdev_name": "Malloc1" 00:05:30.796 } 00:05:30.796 ]' 00:05:30.796 17:15:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.055 /dev/nbd1' 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.055 /dev/nbd1' 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.055 17:15:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.056 17:15:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.056 17:15:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.056 17:15:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.056 256+0 records in 00:05:31.056 256+0 records out 00:05:31.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010777 s, 97.3 MB/s 00:05:31.056 17:15:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.056 17:15:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.056 256+0 records in 00:05:31.056 256+0 records out 00:05:31.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143859 s, 72.9 MB/s 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.056 256+0 records in 00:05:31.056 256+0 records out 00:05:31.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157984 s, 66.4 MB/s 00:05:31.056 17:16:00 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.056 17:16:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.315 17:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.574 17:16:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.574 17:16:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.834 17:16:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.093 [2024-12-09 17:16:01.090637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.093 [2024-12-09 17:16:01.126303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.093 [2024-12-09 17:16:01.126304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.093 [2024-12-09 17:16:01.166761] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.093 [2024-12-09 17:16:01.166801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.381 17:16:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.381 17:16:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.381 spdk_app_start Round 1 00:05:35.381 17:16:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2392553 /var/tmp/spdk-nbd.sock 00:05:35.381 17:16:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2392553 ']' 00:05:35.382 17:16:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.382 17:16:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.382 17:16:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
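
Note: each app_repeat round above exercises both Malloc bdevs through the kernel nbd driver with the same write-then-verify cycle visible in the trace: fill a scratch file from /dev/urandom, dd it onto each nbd device with direct I/O, then cmp the device contents back against the file. A condensed sketch of that loop (scratch path shortened here; block size and count as in the trace):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write, bypassing the page cache
      cmp -b -n 1M /tmp/nbdrandtest $nbd                              # byte-for-byte read-back check
  done
  rm /tmp/nbdrandtest
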
00:05:35.382 17:16:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.382 17:16:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.382 17:16:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.382 17:16:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.382 17:16:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.382 Malloc0 00:05:35.382 17:16:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.382 Malloc1 00:05:35.382 17:16:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.382 17:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.641 /dev/nbd0 00:05:35.641 17:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.641 17:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.641 1+0 records in 00:05:35.641 1+0 records out 00:05:35.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190063 s, 21.6 MB/s 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.641 17:16:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.641 17:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.641 17:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.641 17:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.900 /dev/nbd1 00:05:35.900 17:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.900 17:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.900 17:16:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.901 1+0 records in 00:05:35.901 1+0 records out 00:05:35.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248616 s, 16.5 MB/s 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.901 17:16:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.901 17:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.901 17:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.901 17:16:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.901 17:16:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.901 17:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:36.159 { 00:05:36.159 "nbd_device": "/dev/nbd0", 00:05:36.159 "bdev_name": "Malloc0" 00:05:36.159 }, 00:05:36.159 { 00:05:36.159 "nbd_device": "/dev/nbd1", 00:05:36.159 "bdev_name": "Malloc1" 00:05:36.159 } 00:05:36.159 ]' 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.159 { 00:05:36.159 "nbd_device": "/dev/nbd0", 00:05:36.159 "bdev_name": "Malloc0" 00:05:36.159 }, 00:05:36.159 { 00:05:36.159 "nbd_device": "/dev/nbd1", 00:05:36.159 "bdev_name": "Malloc1" 00:05:36.159 } 00:05:36.159 ]' 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.159 /dev/nbd1' 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.159 /dev/nbd1' 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.159 17:16:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.160 256+0 records in 00:05:36.160 256+0 records out 00:05:36.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994092 s, 105 MB/s 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.160 256+0 records in 00:05:36.160 256+0 records out 00:05:36.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149128 s, 70.3 MB/s 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.160 17:16:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.418 256+0 records in 00:05:36.418 256+0 records out 00:05:36.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01573 s, 66.7 MB/s 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.418 17:16:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.419 17:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.678 17:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.937 17:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.937 17:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.937 17:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.937 17:16:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.937 17:16:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.196 17:16:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.454 [2024-12-09 17:16:06.391954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.454 [2024-12-09 17:16:06.428817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.454 [2024-12-09 17:16:06.428817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.454 [2024-12-09 17:16:06.469889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.454 [2024-12-09 17:16:06.469930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.745 17:16:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.745 17:16:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.745 spdk_app_start Round 2 00:05:40.746 17:16:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2392553 /var/tmp/spdk-nbd.sock 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2392553 ']' 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
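
Note: the waitfornbd helper seen throughout these rounds does not sleep blindly; it polls /proc/partitions until the device node shows up, then proves the device answers I/O with a single direct 4 KiB read. A sketch of that pattern, assuming the 20-attempt bound from the trace (the per-attempt delay is an assumption; the trace does not show one):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                   # assumed pause between polls
      done
      grep -q -w "$nbd_name" /proc/partitions || return 1
      # one direct read confirms the block device actually services requests
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  }
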
00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.746 17:16:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.746 17:16:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.746 Malloc0 00:05:40.746 17:16:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.746 Malloc1 00:05:40.746 17:16:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.746 17:16:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.005 /dev/nbd0 00:05:41.005 17:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.005 17:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:41.005 1+0 records in 00:05:41.005 1+0 records out 00:05:41.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195072 s, 21.0 MB/s 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.005 17:16:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.005 17:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.005 17:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.005 17:16:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.264 /dev/nbd1 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.264 1+0 records in 00:05:41.264 1+0 records out 00:05:41.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020937 s, 19.6 MB/s 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.264 17:16:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.264 17:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:41.523 { 00:05:41.523 "nbd_device": "/dev/nbd0", 00:05:41.523 "bdev_name": "Malloc0" 00:05:41.523 }, 00:05:41.523 { 00:05:41.523 "nbd_device": "/dev/nbd1", 00:05:41.523 "bdev_name": "Malloc1" 00:05:41.523 } 00:05:41.523 ]' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.523 { 00:05:41.523 "nbd_device": "/dev/nbd0", 00:05:41.523 "bdev_name": "Malloc0" 00:05:41.523 }, 00:05:41.523 { 00:05:41.523 "nbd_device": "/dev/nbd1", 00:05:41.523 "bdev_name": "Malloc1" 00:05:41.523 } 00:05:41.523 ]' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.523 /dev/nbd1' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.523 /dev/nbd1' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.523 256+0 records in 00:05:41.523 256+0 records out 00:05:41.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106456 s, 98.5 MB/s 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.523 256+0 records in 00:05:41.523 256+0 records out 00:05:41.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145168 s, 72.2 MB/s 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.523 256+0 records in 00:05:41.523 256+0 records out 00:05:41.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155012 s, 67.6 MB/s 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.523 17:16:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.782 17:16:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.041 17:16:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.041 17:16:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.041 17:16:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.042 17:16:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.301 17:16:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.301 17:16:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.562 17:16:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.562 [2024-12-09 17:16:11.712153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.822 [2024-12-09 17:16:11.752179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.822 [2024-12-09 17:16:11.752180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.822 [2024-12-09 17:16:11.792566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.822 [2024-12-09 17:16:11.792608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.114 17:16:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2392553 /var/tmp/spdk-nbd.sock 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2392553 ']' 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
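
Note: before each round the previous app instance is torn down with killprocess, whose steps appear verbatim in the trace below: probe the pid with kill -0, read its comm name via ps, refuse obviously wrong targets, then kill and wait. A simplified sketch (the real helper's sudo handling is more elaborate than the single guard shown here):

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1          # assumed guard; do not kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # valid because the app is a child of this shell
  }
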
00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.114 17:16:14 event.app_repeat -- event/event.sh@39 -- # killprocess 2392553 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2392553 ']' 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2392553 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392553 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392553' 00:05:46.114 killing process with pid 2392553 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2392553 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2392553 00:05:46.114 spdk_app_start is called in Round 0. 00:05:46.114 Shutdown signal received, stop current app iteration 00:05:46.114 Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 reinitialization... 00:05:46.114 spdk_app_start is called in Round 1. 00:05:46.114 Shutdown signal received, stop current app iteration 00:05:46.114 Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 reinitialization... 00:05:46.114 spdk_app_start is called in Round 2. 00:05:46.114 Shutdown signal received, stop current app iteration 00:05:46.114 Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 reinitialization... 00:05:46.114 spdk_app_start is called in Round 3. 
00:05:46.114 Shutdown signal received, stop current app iteration 00:05:46.114 17:16:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.114 17:16:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.114 00:05:46.114 real 0m16.348s 00:05:46.114 user 0m36.087s 00:05:46.114 sys 0m2.420s 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.114 17:16:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 ************************************ 00:05:46.114 END TEST app_repeat 00:05:46.114 ************************************ 00:05:46.114 17:16:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.114 17:16:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.114 17:16:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.114 17:16:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.114 17:16:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 ************************************ 00:05:46.114 START TEST cpu_locks 00:05:46.114 ************************************ 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.114 * Looking for test storage... 00:05:46.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.114 17:16:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.114 17:16:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.114 --rc genhtml_branch_coverage=1 00:05:46.114 --rc genhtml_function_coverage=1 00:05:46.115 --rc genhtml_legend=1 00:05:46.115 --rc geninfo_all_blocks=1 00:05:46.115 --rc geninfo_unexecuted_blocks=1 00:05:46.115 00:05:46.115 ' 00:05:46.115 17:16:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.115 --rc genhtml_branch_coverage=1 00:05:46.115 --rc genhtml_function_coverage=1 00:05:46.115 --rc genhtml_legend=1 00:05:46.115 --rc geninfo_all_blocks=1 00:05:46.115 --rc geninfo_unexecuted_blocks=1 00:05:46.115 00:05:46.115 ' 00:05:46.115 17:16:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.115 --rc genhtml_branch_coverage=1 00:05:46.115 --rc genhtml_function_coverage=1 00:05:46.115 --rc genhtml_legend=1 00:05:46.115 --rc geninfo_all_blocks=1 00:05:46.115 --rc geninfo_unexecuted_blocks=1 00:05:46.115 00:05:46.115 ' 00:05:46.115 17:16:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.115 --rc genhtml_branch_coverage=1 00:05:46.115 --rc genhtml_function_coverage=1 00:05:46.115 --rc genhtml_legend=1 00:05:46.115 --rc geninfo_all_blocks=1 00:05:46.115 --rc geninfo_unexecuted_blocks=1 00:05:46.115 00:05:46.115 ' 00:05:46.115 17:16:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.115 17:16:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.115 17:16:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.115 17:16:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.115 17:16:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.115 17:16:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.115 17:16:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.115 ************************************ 
00:05:46.115 START TEST default_locks 00:05:46.115 ************************************ 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2395513 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2395513 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2395513 ']' 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.115 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.374 [2024-12-09 17:16:15.294650] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:46.374 [2024-12-09 17:16:15.294693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395513 ] 00:05:46.374 [2024-12-09 17:16:15.367072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.374 [2024-12-09 17:16:15.405003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.633 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.633 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:46.633 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2395513 00:05:46.633 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2395513 00:05:46.633 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.893 lslocks: write error 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2395513 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2395513 ']' 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2395513 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395513 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2395513' 00:05:46.893 killing process with pid 2395513 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2395513 00:05:46.893 17:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2395513 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2395513 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2395513 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2395513 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2395513 ']' 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2395513) - No such process 00:05:47.152 ERROR: process (pid: 2395513) is no longer running 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.152 00:05:47.152 real 0m0.973s 00:05:47.152 user 0m0.925s 00:05:47.152 sys 0m0.456s 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.152 17:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.152 ************************************ 00:05:47.152 END TEST default_locks 00:05:47.152 ************************************ 00:05:47.152 17:16:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.152 17:16:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.152 17:16:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.152 17:16:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.152 ************************************ 00:05:47.152 START TEST default_locks_via_rpc 00:05:47.152 ************************************ 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2395766 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2395766 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2395766 ']' 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
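END TEST default_locks above covers the base case: spdk_tgt -m 0x1 takes an advisory lock per claimed core, lslocks on the target pid proves the lock is held, and after the process is killed, waitforlisten on the stale pid must fail (es=1) with no lock files left behind. The two probes traced above amount to the following helpers (shapes inferred from the xtrace; the no_locks glob pattern is an assumption):

    # locks_exist: the pipeline traced above, shown verbatim.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock      # true while the target holds a core lock
    }

    # no_locks: the trace only shows the expanded result lock_files=(); glob assumed.
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock*)   # empty here, so nullglob is in effect
        (( ${#lock_files[@]} == 0 ))                 # succeed only when nothing is locked
    }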
00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.152 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.411 [2024-12-09 17:16:16.342310] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:47.411 [2024-12-09 17:16:16.342355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395766 ] 00:05:47.411 [2024-12-09 17:16:16.417197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.411 [2024-12-09 17:16:16.455842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2395766 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2395766 00:05:47.670 17:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2395766 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2395766 ']' 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2395766 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395766 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.929 
17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2395766' 00:05:47.929 killing process with pid 2395766 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2395766 00:05:47.929 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2395766 00:05:48.497 00:05:48.497 real 0m1.080s 00:05:48.497 user 0m1.036s 00:05:48.497 sys 0m0.490s 00:05:48.497 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.497 17:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.497 ************************************ 00:05:48.498 END TEST default_locks_via_rpc 00:05:48.498 ************************************ 00:05:48.498 17:16:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.498 17:16:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.498 17:16:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.498 17:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.498 ************************************ 00:05:48.498 START TEST non_locking_app_on_locked_coremask 00:05:48.498 ************************************ 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2396025 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2396025 /var/tmp/spdk.sock 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2396025 ']' 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.498 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.498 [2024-12-09 17:16:17.487327] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
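default_locks_via_rpc, finishing above, drives the same per-core locks over RPC instead of through process lifetime: framework_disable_cpumask_locks makes a live target release its lock files, framework_enable_cpumask_locks re-acquires them, and the lslocks probe must succeed again afterwards. The traced order, with rpc_cmd standing in (as an assumed shorthand) for the suite's wrapper around scripts/rpc.py on /var/tmp/spdk.sock:

    # Order as traced above; rpc_cmd is assumed shorthand for the suite's RPC wrapper.
    rpc_cmd framework_disable_cpumask_locks     # live target drops /var/tmp/spdk_cpu_lock_*
    no_locks                                    # nothing may be locked now
    rpc_cmd framework_enable_cpumask_locks      # re-acquire the per-core locks
    locks_exist "$spdk_tgt_pid"                 # the core-0 lock must be back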
00:05:48.498 [2024-12-09 17:16:17.487371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396025 ] 00:05:48.498 [2024-12-09 17:16:17.562211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.498 [2024-12-09 17:16:17.602701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2396029 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2396029 /var/tmp/spdk2.sock 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2396029 ']' 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.757 17:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.757 [2024-12-09 17:16:17.877758] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:48.757 [2024-12-09 17:16:17.877801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396029 ] 00:05:49.016 [2024-12-09 17:16:17.968430] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.016 [2024-12-09 17:16:17.968458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.016 [2024-12-09 17:16:18.047240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.583 17:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.583 17:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.583 17:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2396025 00:05:49.583 17:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2396025 00:05:49.583 17:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.242 lslocks: write error 00:05:50.242 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2396025 00:05:50.242 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2396025 ']' 00:05:50.242 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2396025 00:05:50.242 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.242 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.242 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396025 00:05:50.544 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.544 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.544 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396025' 00:05:50.544 killing process with pid 2396025 00:05:50.544 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2396025 00:05:50.544 17:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2396025 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2396029 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2396029 ']' 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2396029 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396029 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396029' 00:05:51.110 
killing process with pid 2396029 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2396029 00:05:51.110 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2396029 00:05:51.368 00:05:51.368 real 0m2.921s 00:05:51.368 user 0m3.046s 00:05:51.368 sys 0m0.971s 00:05:51.368 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.368 17:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 ************************************ 00:05:51.368 END TEST non_locking_app_on_locked_coremask 00:05:51.368 ************************************ 00:05:51.368 17:16:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.368 17:16:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.368 17:16:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.368 17:16:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 ************************************ 00:05:51.368 START TEST locking_app_on_unlocked_coremask 00:05:51.368 ************************************ 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2396525 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2396525 /var/tmp/spdk.sock 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2396525 ']' 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.368 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 [2024-12-09 17:16:20.478722] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:51.368 [2024-12-09 17:16:20.478767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396525 ] 00:05:51.627 [2024-12-09 17:16:20.554582] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
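non_locking_app_on_locked_coremask, closed above, establishes the two-instance pattern the remaining tests vary: a first spdk_tgt locks core 0 with -m 0x1, and a second target on the same mask can only coexist because it is started with --disable-cpumask-locks ('CPU core locks deactivated' in its startup notices). A sketch of the launch pair, with binary path and sockets as logged; backgrounding with & and $! stands in for the suite's own process handling:

    # Sketch of the two-instance launch (paths and sockets from the trace; & / $! assumed).
    build/bin/spdk_tgt -m 0x1 &                      # takes /var/tmp/spdk_cpu_lock_000
    spdk_tgt_pid=$!
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock \
        --disable-cpumask-locks &                    # same mask, lock acquisition skipped
    spdk_tgt_pid2=$!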
00:05:51.627 [2024-12-09 17:16:20.554605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.627 [2024-12-09 17:16:20.591265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2396528 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2396528 /var/tmp/spdk2.sock 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2396528 ']' 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.886 17:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.886 [2024-12-09 17:16:20.859574] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:51.886 [2024-12-09 17:16:20.859618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396528 ] 00:05:51.886 [2024-12-09 17:16:20.950093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.886 [2024-12-09 17:16:21.029870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.822 17:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.822 17:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.822 17:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2396528 00:05:52.822 17:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2396528 00:05:52.822 17:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.390 lslocks: write error 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2396525 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2396525 ']' 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2396525 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396525 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396525' 00:05:53.390 killing process with pid 2396525 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2396525 00:05:53.390 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2396525 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2396528 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2396528 ']' 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2396528 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396528 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.957 17:16:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396528' 00:05:53.957 killing process with pid 2396528 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2396528 00:05:53.957 17:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2396528 00:05:54.216 00:05:54.216 real 0m2.868s 00:05:54.216 user 0m3.026s 00:05:54.216 sys 0m0.959s 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.216 ************************************ 00:05:54.216 END TEST locking_app_on_unlocked_coremask 00:05:54.216 ************************************ 00:05:54.216 17:16:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.216 17:16:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.216 17:16:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.216 17:16:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.216 ************************************ 00:05:54.216 START TEST locking_app_on_locked_coremask 00:05:54.216 ************************************ 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2397016 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2397016 /var/tmp/spdk.sock 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2397016 ']' 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.216 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.475 [2024-12-09 17:16:23.419839] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
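Every target start above gates on waitforlisten, whose locals (pid, rpc_addr, max_retries=100) appear in the trace while its loop body runs with xtrace disabled. A rough shape only; the liveness probe here is an assumption, not SPDK's actual implementation:

    # Rough shape: pid/rpc_addr/max_retries come from the trace, the probe is assumed.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $rpc_addr ]] && return 0           # assumed probe: RPC socket exists
            sleep 0.5
        done
        return 1                                     # never started listening
    }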
00:05:54.475 [2024-12-09 17:16:23.419877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397016 ] 00:05:54.475 [2024-12-09 17:16:23.494843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.475 [2024-12-09 17:16:23.535421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2397027 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2397027 /var/tmp/spdk2.sock 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2397027 /var/tmp/spdk2.sock 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2397027 /var/tmp/spdk2.sock 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2397027 ']' 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.734 17:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.734 [2024-12-09 17:16:23.803599] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:54.734 [2024-12-09 17:16:23.803647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397027 ] 00:05:54.734 [2024-12-09 17:16:23.888146] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2397016 has claimed it. 00:05:54.734 [2024-12-09 17:16:23.888178] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2397027) - No such process 00:05:55.301 ERROR: process (pid: 2397027) is no longer running 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2397016 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2397016 00:05:55.301 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.868 lslocks: write error 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2397016 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2397016 ']' 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2397016 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397016 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397016' 00:05:55.868 killing process with pid 2397016 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2397016 00:05:55.868 17:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2397016 00:05:56.129 00:05:56.129 real 0m1.774s 00:05:56.129 user 0m1.879s 00:05:56.129 sys 0m0.601s 00:05:56.129 17:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
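That failure is the intended result for locking_app_on_locked_coremask: with locks active on both sides, the second spdk_tgt aborts in claim_cpu_cores ('Cannot create lock on core 0, probably process 2397016 has claimed it'), the first target keeps its lock, and NOT waitforlisten asserts the death. The NOT wrapper inverts an exit status; its core, matching the es handling in the trace, with the signal branch body assumed:

    # Core of NOT as traced (es=0, es=$?, (( !es == 0 ))); the >128 branch body is assumed.
    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            es=$(( es - 128 ))      # assumed: strip the signal offset, keep it a failure
        fi
        (( !es == 0 ))              # exit 0 only when the wrapped command failed
    }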
00:05:56.129 17:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.129 ************************************ 00:05:56.129 END TEST locking_app_on_locked_coremask 00:05:56.129 ************************************ 00:05:56.129 17:16:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.129 17:16:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.129 17:16:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.129 17:16:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.129 ************************************ 00:05:56.129 START TEST locking_overlapped_coremask 00:05:56.129 ************************************ 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2397280 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2397280 /var/tmp/spdk.sock 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2397280 ']' 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.129 17:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.129 [2024-12-09 17:16:25.263662] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:05:56.129 [2024-12-09 17:16:25.263707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397280 ] 00:05:56.388 [2024-12-09 17:16:25.340175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.388 [2024-12-09 17:16:25.385512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.388 [2024-12-09 17:16:25.385621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.388 [2024-12-09 17:16:25.385622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2397511 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2397511 /var/tmp/spdk2.sock 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2397511 /var/tmp/spdk2.sock 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2397511 /var/tmp/spdk2.sock 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2397511 ']' 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.956 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.214 [2024-12-09 17:16:26.145893] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
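The two masks overlap on exactly one core: the first target's -m 0x7 is binary 111 (cores 0-2) and the second target's -m 0x1c is binary 11100 (cores 2-4), so core 2 is the only contended lock:

    # Bash arithmetic, just illustrative:
    printf '%d\n' $(( 0x7 & 0x1c ))   # -> 4 = bit 2, i.e. only core 2 is shared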
00:05:57.214 [2024-12-09 17:16:26.145941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397511 ] 00:05:57.214 [2024-12-09 17:16:26.237384] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2397280 has claimed it. 00:05:57.214 [2024-12-09 17:16:26.237422] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2397511) - No such process 00:05:57.783 ERROR: process (pid: 2397511) is no longer running 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.783 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2397280 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2397280 ']' 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2397280 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397280 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397280' 00:05:57.784 killing process with pid 2397280 00:05:57.784 17:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2397280 00:05:57.784 17:16:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2397280 00:05:58.043 00:05:58.043 real 0m1.914s 00:05:58.043 user 0m5.478s 00:05:58.043 sys 0m0.443s 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.043 ************************************ 00:05:58.043 END TEST locking_overlapped_coremask 00:05:58.043 ************************************ 00:05:58.043 17:16:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.043 17:16:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.043 17:16:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.043 17:16:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.043 ************************************ 00:05:58.043 START TEST locking_overlapped_coremask_via_rpc 00:05:58.043 ************************************ 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2397764 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2397764 /var/tmp/spdk.sock 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2397764 ']' 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.043 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 [2024-12-09 17:16:27.251285] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:58.303 [2024-12-09 17:16:27.251330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397764 ] 00:05:58.303 [2024-12-09 17:16:27.324523] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
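This first target of the via_rpc variant was started with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice just above: no /var/tmp/spdk_cpu_lock_* files exist yet, and the test will create them later over JSON-RPC. A sketch of that step, using the rpc.py path and socket from this workspace (the trailing ls is only illustrative):

  # Sketch: enable core locks after a lock-less start (via_rpc variant).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_*   # expect _000 _001 _002 for mask 0x7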
00:05:58.303 [2024-12-09 17:16:27.324548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.303 [2024-12-09 17:16:27.363152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.303 [2024-12-09 17:16:27.363261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.303 [2024-12-09 17:16:27.363261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2397770 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2397770 /var/tmp/spdk2.sock 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2397770 ']' 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.563 17:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.563 [2024-12-09 17:16:27.645009] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:05:58.563 [2024-12-09 17:16:27.645057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397770 ] 00:05:58.563 [2024-12-09 17:16:27.735631] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
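Both instances come up cleanly despite overlapping masks (0x7 and 0x1c share core 2) precisely because lock claiming is off; condensed, the trace so far amounts to:

  # Sketch: overlapping coremasks coexist when cpumask locks are disabled.
  ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # Neither process has created a /var/tmp/spdk_cpu_lock_* file yet.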
00:05:58.563 [2024-12-09 17:16:27.735661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.823 [2024-12-09 17:16:27.817651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.823 [2024-12-09 17:16:27.821266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.823 [2024-12-09 17:16:27.821267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.391 [2024-12-09 17:16:28.501287] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2397764 has claimed it. 
00:05:59.391 request: 00:05:59.391 { 00:05:59.391 "method": "framework_enable_cpumask_locks", 00:05:59.391 "req_id": 1 00:05:59.391 } 00:05:59.391 Got JSON-RPC error response 00:05:59.391 response: 00:05:59.391 { 00:05:59.391 "code": -32603, 00:05:59.391 "message": "Failed to claim CPU core: 2" 00:05:59.391 } 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.391 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2397764 /var/tmp/spdk.sock 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2397764 ']' 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.392 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2397770 /var/tmp/spdk2.sock 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2397770 ']' 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
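With the expected -32603 failure recorded on spdk2.sock, the test re-checks the lock files held by the first target. The heavily backslash-escaped pattern match that appears shortly below is only xtrace's rendering of a plain array comparison, along these lines (a reconstruction of the cpu_locks.sh source, not a verbatim quote):

  # Sketch: the check_remaining_locks idiom traced below.
  locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks intact"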
00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.651 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.911 00:05:59.911 real 0m1.712s 00:05:59.911 user 0m0.819s 00:05:59.911 sys 0m0.137s 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.911 17:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.911 ************************************ 00:05:59.911 END TEST locking_overlapped_coremask_via_rpc 00:05:59.911 ************************************ 00:05:59.911 17:16:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.911 17:16:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2397764 ]] 00:05:59.911 17:16:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2397764 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2397764 ']' 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2397764 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397764 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397764' 00:05:59.911 killing process with pid 2397764 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2397764 00:05:59.911 17:16:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2397764 00:06:00.171 17:16:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2397770 ]] 00:06:00.171 17:16:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2397770 00:06:00.171 17:16:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2397770 ']' 00:06:00.171 17:16:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2397770 00:06:00.171 17:16:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.171 17:16:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:00.171 17:16:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397770 00:06:00.430 17:16:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.430 17:16:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.430 17:16:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397770' 00:06:00.430 killing process with pid 2397770 00:06:00.430 17:16:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2397770 00:06:00.430 17:16:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2397770 00:06:00.689 17:16:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.689 17:16:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.690 17:16:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2397764 ]] 00:06:00.690 17:16:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2397764 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2397764 ']' 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2397764 00:06:00.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2397764) - No such process 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2397764 is not found' 00:06:00.690 Process with pid 2397764 is not found 00:06:00.690 17:16:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2397770 ]] 00:06:00.690 17:16:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2397770 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2397770 ']' 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2397770 00:06:00.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2397770) - No such process 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2397770 is not found' 00:06:00.690 Process with pid 2397770 is not found 00:06:00.690 17:16:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.690 00:06:00.690 real 0m14.639s 00:06:00.690 user 0m25.952s 00:06:00.690 sys 0m5.026s 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.690 17:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.690 ************************************ 00:06:00.690 END TEST cpu_locks 00:06:00.690 ************************************ 00:06:00.690 00:06:00.690 real 0m39.618s 00:06:00.690 user 1m16.461s 00:06:00.690 sys 0m8.434s 00:06:00.690 17:16:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.690 17:16:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.690 ************************************ 00:06:00.690 END TEST event 00:06:00.690 ************************************ 00:06:00.690 17:16:29 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.690 17:16:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.690 17:16:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.690 17:16:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.690 ************************************ 00:06:00.690 START TEST thread 00:06:00.690 ************************************ 00:06:00.690 17:16:29 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.690 * Looking for test storage... 00:06:00.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:00.690 17:16:29 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.690 17:16:29 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.690 17:16:29 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.949 17:16:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.949 17:16:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.949 17:16:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.949 17:16:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.949 17:16:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.949 17:16:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.949 17:16:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.949 17:16:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.949 17:16:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.949 17:16:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.949 17:16:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.949 17:16:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:00.949 17:16:29 thread -- scripts/common.sh@345 -- # : 1 00:06:00.949 17:16:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.949 17:16:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.949 17:16:29 thread -- scripts/common.sh@365 -- # decimal 1 00:06:00.949 17:16:29 thread -- scripts/common.sh@353 -- # local d=1 00:06:00.949 17:16:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.949 17:16:29 thread -- scripts/common.sh@355 -- # echo 1 00:06:00.949 17:16:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.949 17:16:29 thread -- scripts/common.sh@366 -- # decimal 2 00:06:00.949 17:16:29 thread -- scripts/common.sh@353 -- # local d=2 00:06:00.949 17:16:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.949 17:16:29 thread -- scripts/common.sh@355 -- # echo 2 00:06:00.949 17:16:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.949 17:16:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.949 17:16:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.949 17:16:29 thread -- scripts/common.sh@368 -- # return 0 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.949 --rc genhtml_branch_coverage=1 00:06:00.949 --rc genhtml_function_coverage=1 00:06:00.949 --rc genhtml_legend=1 00:06:00.949 --rc geninfo_all_blocks=1 00:06:00.949 --rc geninfo_unexecuted_blocks=1 00:06:00.949 00:06:00.949 ' 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.949 --rc genhtml_branch_coverage=1 00:06:00.949 --rc genhtml_function_coverage=1 00:06:00.949 --rc genhtml_legend=1 00:06:00.949 --rc geninfo_all_blocks=1 00:06:00.949 --rc geninfo_unexecuted_blocks=1 00:06:00.949 
00:06:00.949 ' 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.949 --rc genhtml_branch_coverage=1 00:06:00.949 --rc genhtml_function_coverage=1 00:06:00.949 --rc genhtml_legend=1 00:06:00.949 --rc geninfo_all_blocks=1 00:06:00.949 --rc geninfo_unexecuted_blocks=1 00:06:00.949 00:06:00.949 ' 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.949 --rc genhtml_branch_coverage=1 00:06:00.949 --rc genhtml_function_coverage=1 00:06:00.949 --rc genhtml_legend=1 00:06:00.949 --rc geninfo_all_blocks=1 00:06:00.949 --rc geninfo_unexecuted_blocks=1 00:06:00.949 00:06:00.949 ' 00:06:00.949 17:16:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.949 17:16:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.949 ************************************ 00:06:00.949 START TEST thread_poller_perf 00:06:00.949 ************************************ 00:06:00.949 17:16:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.949 [2024-12-09 17:16:29.998477] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:06:00.950 [2024-12-09 17:16:29.998544] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398331 ] 00:06:00.950 [2024-12-09 17:16:30.071389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.950 [2024-12-09 17:16:30.112520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.950 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:02.329 [2024-12-09T16:16:31.508Z] ====================================== 00:06:02.329 [2024-12-09T16:16:31.508Z] busy:2107762178 (cyc) 00:06:02.329 [2024-12-09T16:16:31.508Z] total_run_count: 411000 00:06:02.329 [2024-12-09T16:16:31.508Z] tsc_hz: 2100000000 (cyc) 00:06:02.329 [2024-12-09T16:16:31.508Z] ====================================== 00:06:02.329 [2024-12-09T16:16:31.508Z] poller_cost: 5128 (cyc), 2441 (nsec) 00:06:02.329 00:06:02.329 real 0m1.180s 00:06:02.329 user 0m1.106s 00:06:02.329 sys 0m0.070s 00:06:02.329 17:16:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.329 17:16:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.329 ************************************ 00:06:02.329 END TEST thread_poller_perf 00:06:02.329 ************************************ 00:06:02.329 17:16:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.329 17:16:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.330 17:16:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.330 17:16:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.330 ************************************ 00:06:02.330 START TEST thread_poller_perf 00:06:02.330 ************************************ 00:06:02.330 17:16:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.330 [2024-12-09 17:16:31.250946] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:06:02.330 [2024-12-09 17:16:31.251007] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398575 ] 00:06:02.330 [2024-12-09 17:16:31.326789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.330 [2024-12-09 17:16:31.367090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.330 Running 1000 pollers for 1 seconds with 0 microseconds period. 
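Before the 0-microsecond run reports its own block, note how poller_cost in these summaries follows from the printed numbers: busy cycles divided by total_run_count, then converted through tsc_hz. Mirroring the first run's figures (the binary computes this internally; the arithmetic here only reproduces its output):

  # Sketch: derive poller_cost from the first summary block above.
  busy=2107762178     # busy: (cyc)
  runs=411000         # total_run_count
  tsc_hz=2100000000   # tsc_hz: (cyc)
  cyc=$(( busy / runs ))                   # 5128 cyc per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2441 nsec per poll
  echo "poller_cost: $cyc (cyc), $nsec (nsec)"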
00:06:03.269 [2024-12-09T16:16:32.448Z] ====================================== 00:06:03.269 [2024-12-09T16:16:32.448Z] busy:2101567766 (cyc) 00:06:03.269 [2024-12-09T16:16:32.448Z] total_run_count: 5178000 00:06:03.269 [2024-12-09T16:16:32.448Z] tsc_hz: 2100000000 (cyc) 00:06:03.269 [2024-12-09T16:16:32.448Z] ====================================== 00:06:03.269 [2024-12-09T16:16:32.448Z] poller_cost: 405 (cyc), 192 (nsec) 00:06:03.269 00:06:03.269 real 0m1.175s 00:06:03.269 user 0m1.089s 00:06:03.269 sys 0m0.082s 00:06:03.269 17:16:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.269 17:16:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 END TEST thread_poller_perf 00:06:03.269 ************************************ 00:06:03.269 17:16:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.269 00:06:03.269 real 0m2.671s 00:06:03.269 user 0m2.354s 00:06:03.269 sys 0m0.329s 00:06:03.269 17:16:32 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.269 17:16:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 END TEST thread 00:06:03.269 ************************************ 00:06:03.529 17:16:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:03.529 17:16:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.529 17:16:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.529 17:16:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.529 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 ************************************ 00:06:03.529 START TEST app_cmdline 00:06:03.529 ************************************ 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.529 * Looking for test storage... 
00:06:03.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.529 17:16:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.529 --rc genhtml_branch_coverage=1 00:06:03.529 --rc genhtml_function_coverage=1 00:06:03.529 --rc genhtml_legend=1 00:06:03.529 --rc geninfo_all_blocks=1 00:06:03.529 --rc geninfo_unexecuted_blocks=1 00:06:03.529 00:06:03.529 ' 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.529 --rc genhtml_branch_coverage=1 00:06:03.529 --rc genhtml_function_coverage=1 00:06:03.529 --rc genhtml_legend=1 00:06:03.529 --rc geninfo_all_blocks=1 00:06:03.529 --rc geninfo_unexecuted_blocks=1 
00:06:03.529 00:06:03.529 ' 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.529 --rc genhtml_branch_coverage=1 00:06:03.529 --rc genhtml_function_coverage=1 00:06:03.529 --rc genhtml_legend=1 00:06:03.529 --rc geninfo_all_blocks=1 00:06:03.529 --rc geninfo_unexecuted_blocks=1 00:06:03.529 00:06:03.529 ' 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.529 --rc genhtml_branch_coverage=1 00:06:03.529 --rc genhtml_function_coverage=1 00:06:03.529 --rc genhtml_legend=1 00:06:03.529 --rc geninfo_all_blocks=1 00:06:03.529 --rc geninfo_unexecuted_blocks=1 00:06:03.529 00:06:03.529 ' 00:06:03.529 17:16:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:03.529 17:16:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2398871 00:06:03.529 17:16:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:03.529 17:16:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2398871 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2398871 ']' 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.529 17:16:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.789 [2024-12-09 17:16:32.743659] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
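The cmdline test has just launched spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods will be served; the trace that follows confirms spdk_get_version answers normally while env_dpdk_get_mem_stats is rejected with -32601. Condensed (paths relative to the spdk checkout):

  # Sketch: the RPC allowlist behaviour exercised by cmdline.sh.
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # served: version JSON below
  ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 Method not found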
00:06:03.789 [2024-12-09 17:16:32.743707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398871 ] 00:06:03.789 [2024-12-09 17:16:32.817973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.789 [2024-12-09 17:16:32.858594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.049 17:16:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.049 17:16:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:04.049 17:16:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:04.308 { 00:06:04.308 "version": "SPDK v25.01-pre git sha1 6584139bf", 00:06:04.308 "fields": { 00:06:04.308 "major": 25, 00:06:04.308 "minor": 1, 00:06:04.308 "patch": 0, 00:06:04.308 "suffix": "-pre", 00:06:04.308 "commit": "6584139bf" 00:06:04.308 } 00:06:04.308 } 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:04.308 17:16:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:04.308 17:16:33 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.308 request: 00:06:04.308 { 00:06:04.308 "method": "env_dpdk_get_mem_stats", 00:06:04.308 "req_id": 1 00:06:04.308 } 00:06:04.308 Got JSON-RPC error response 00:06:04.308 response: 00:06:04.308 { 00:06:04.308 "code": -32601, 00:06:04.308 "message": "Method not found" 00:06:04.308 } 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.567 17:16:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2398871 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2398871 ']' 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2398871 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398871 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398871' 00:06:04.567 killing process with pid 2398871 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 2398871 00:06:04.567 17:16:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 2398871 00:06:04.827 00:06:04.827 real 0m1.338s 00:06:04.827 user 0m1.528s 00:06:04.827 sys 0m0.472s 00:06:04.827 17:16:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.827 17:16:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.827 ************************************ 00:06:04.827 END TEST app_cmdline 00:06:04.827 ************************************ 00:06:04.827 17:16:33 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.827 17:16:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.827 17:16:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.827 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:06:04.827 ************************************ 00:06:04.827 START TEST version 00:06:04.827 ************************************ 00:06:04.827 17:16:33 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.827 * Looking for test storage... 
00:06:05.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.088 17:16:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.088 17:16:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.088 17:16:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.088 17:16:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.088 17:16:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.088 17:16:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.088 17:16:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.088 17:16:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.088 17:16:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.088 17:16:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.088 17:16:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.088 17:16:34 version -- scripts/common.sh@344 -- # case "$op" in 00:06:05.088 17:16:34 version -- scripts/common.sh@345 -- # : 1 00:06:05.088 17:16:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.088 17:16:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.088 17:16:34 version -- scripts/common.sh@365 -- # decimal 1 00:06:05.088 17:16:34 version -- scripts/common.sh@353 -- # local d=1 00:06:05.088 17:16:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.088 17:16:34 version -- scripts/common.sh@355 -- # echo 1 00:06:05.088 17:16:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.088 17:16:34 version -- scripts/common.sh@366 -- # decimal 2 00:06:05.088 17:16:34 version -- scripts/common.sh@353 -- # local d=2 00:06:05.088 17:16:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.088 17:16:34 version -- scripts/common.sh@355 -- # echo 2 00:06:05.088 17:16:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.088 17:16:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.088 17:16:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.088 17:16:34 version -- scripts/common.sh@368 -- # return 0 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.088 --rc genhtml_branch_coverage=1 00:06:05.088 --rc genhtml_function_coverage=1 00:06:05.088 --rc genhtml_legend=1 00:06:05.088 --rc geninfo_all_blocks=1 00:06:05.088 --rc geninfo_unexecuted_blocks=1 00:06:05.088 00:06:05.088 ' 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.088 --rc genhtml_branch_coverage=1 00:06:05.088 --rc genhtml_function_coverage=1 00:06:05.088 --rc genhtml_legend=1 00:06:05.088 --rc geninfo_all_blocks=1 00:06:05.088 --rc geninfo_unexecuted_blocks=1 00:06:05.088 00:06:05.088 ' 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.088 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.088 --rc genhtml_branch_coverage=1 00:06:05.088 --rc genhtml_function_coverage=1 00:06:05.088 --rc genhtml_legend=1 00:06:05.088 --rc geninfo_all_blocks=1 00:06:05.088 --rc geninfo_unexecuted_blocks=1 00:06:05.088 00:06:05.088 ' 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.088 --rc genhtml_branch_coverage=1 00:06:05.088 --rc genhtml_function_coverage=1 00:06:05.088 --rc genhtml_legend=1 00:06:05.088 --rc geninfo_all_blocks=1 00:06:05.088 --rc geninfo_unexecuted_blocks=1 00:06:05.088 00:06:05.088 ' 00:06:05.088 17:16:34 version -- app/version.sh@17 -- # get_header_version major 00:06:05.088 17:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # cut -f2 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.088 17:16:34 version -- app/version.sh@17 -- # major=25 00:06:05.088 17:16:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:05.088 17:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # cut -f2 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.088 17:16:34 version -- app/version.sh@18 -- # minor=1 00:06:05.088 17:16:34 version -- app/version.sh@19 -- # get_header_version patch 00:06:05.088 17:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # cut -f2 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.088 17:16:34 version -- app/version.sh@19 -- # patch=0 00:06:05.088 17:16:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:05.088 17:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # cut -f2 00:06:05.088 17:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.088 17:16:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:05.088 17:16:34 version -- app/version.sh@22 -- # version=25.1 00:06:05.088 17:16:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:05.088 17:16:34 version -- app/version.sh@28 -- # version=25.1rc0 00:06:05.088 17:16:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:05.088 17:16:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:05.088 17:16:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:05.088 17:16:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:05.088 00:06:05.088 real 0m0.243s 00:06:05.088 user 0m0.157s 00:06:05.088 sys 0m0.129s 00:06:05.088 17:16:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.088 
17:16:34 version -- common/autotest_common.sh@10 -- # set +x 00:06:05.088 ************************************ 00:06:05.088 END TEST version 00:06:05.088 ************************************ 00:06:05.088 17:16:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:05.088 17:16:34 -- spdk/autotest.sh@194 -- # uname -s 00:06:05.088 17:16:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:05.088 17:16:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.088 17:16:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.088 17:16:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:05.088 17:16:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.088 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.088 17:16:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:05.088 17:16:34 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:05.088 17:16:34 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:05.088 17:16:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.088 17:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.088 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.348 ************************************ 00:06:05.348 START TEST nvmf_tcp 00:06:05.348 ************************************ 00:06:05.348 17:16:34 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:05.348 * Looking for test storage... 
00:06:05.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:05.348 17:16:34 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.348 17:16:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.348 17:16:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.348 17:16:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.348 17:16:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.349 17:16:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.349 --rc genhtml_branch_coverage=1 00:06:05.349 --rc genhtml_function_coverage=1 00:06:05.349 --rc genhtml_legend=1 00:06:05.349 --rc geninfo_all_blocks=1 00:06:05.349 --rc geninfo_unexecuted_blocks=1 00:06:05.349 00:06:05.349 ' 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.349 --rc genhtml_branch_coverage=1 00:06:05.349 --rc genhtml_function_coverage=1 00:06:05.349 --rc genhtml_legend=1 00:06:05.349 --rc geninfo_all_blocks=1 00:06:05.349 --rc geninfo_unexecuted_blocks=1 00:06:05.349 00:06:05.349 ' 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.349 --rc genhtml_branch_coverage=1 00:06:05.349 --rc genhtml_function_coverage=1 00:06:05.349 --rc genhtml_legend=1 00:06:05.349 --rc geninfo_all_blocks=1 00:06:05.349 --rc geninfo_unexecuted_blocks=1 00:06:05.349 00:06:05.349 ' 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.349 --rc genhtml_branch_coverage=1 00:06:05.349 --rc genhtml_function_coverage=1 00:06:05.349 --rc genhtml_legend=1 00:06:05.349 --rc geninfo_all_blocks=1 00:06:05.349 --rc geninfo_unexecuted_blocks=1 00:06:05.349 00:06:05.349 ' 00:06:05.349 17:16:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:05.349 17:16:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:05.349 17:16:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.349 17:16:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.349 ************************************ 00:06:05.349 START TEST nvmf_target_core 00:06:05.349 ************************************ 00:06:05.349 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:05.609 * Looking for test storage... 00:06:05.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.609 --rc genhtml_branch_coverage=1 00:06:05.609 --rc genhtml_function_coverage=1 00:06:05.609 --rc genhtml_legend=1 00:06:05.609 --rc geninfo_all_blocks=1 00:06:05.609 --rc geninfo_unexecuted_blocks=1 00:06:05.609 00:06:05.609 ' 00:06:05.609 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.609 --rc genhtml_branch_coverage=1 00:06:05.610 --rc genhtml_function_coverage=1 00:06:05.610 --rc genhtml_legend=1 00:06:05.610 --rc geninfo_all_blocks=1 00:06:05.610 --rc geninfo_unexecuted_blocks=1 00:06:05.610 00:06:05.610 ' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.610 --rc genhtml_branch_coverage=1 00:06:05.610 --rc genhtml_function_coverage=1 00:06:05.610 --rc genhtml_legend=1 00:06:05.610 --rc geninfo_all_blocks=1 00:06:05.610 --rc geninfo_unexecuted_blocks=1 00:06:05.610 00:06:05.610 ' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.610 --rc genhtml_branch_coverage=1 00:06:05.610 --rc genhtml_function_coverage=1 00:06:05.610 --rc genhtml_legend=1 00:06:05.610 --rc geninfo_all_blocks=1 00:06:05.610 --rc geninfo_unexecuted_blocks=1 00:06:05.610 00:06:05.610 ' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:05.610 
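The scripts/common.sh@333-368 trace that keeps reappearing above is SPDK's cmp_versions helper deciding whether the lcov found on this machine (1.15 here) predates 2.0: both version strings are split on '.', '-' and ':' via IFS, walked component by component up to the longer array's length, and compared numerically, so 'lt 1.15 2' succeeds because 1 < 2 in the first slot. A minimal standalone sketch of that compare, with a hypothetical ver_lt name and without the decimal() digit-validation step the real helper performs:

  # Sketch only: component-wise version compare as traced by scripts/common.sh.
  ver_lt() {
      local -a ver1 ver2
      local v len
      IFS=.-: read -ra ver1 <<< "$1"     # split "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing parts count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                           # equal is not less-than
  }
  ver_lt 1.15 2 && echo 'lcov 1.15 predates 2'   # prints the message

The suite only uses the result to pick LCOV_OPTS/LCOV values suitable for pre-2.0 lcov, which is why the identical probe shows up again at the start of every nested test script.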
************************************ 00:06:05.610 START TEST nvmf_abort 00:06:05.610 ************************************ 00:06:05.610 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:05.870 * Looking for test storage... 00:06:05.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.870 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.871 --rc genhtml_branch_coverage=1 00:06:05.871 --rc genhtml_function_coverage=1 00:06:05.871 --rc genhtml_legend=1 00:06:05.871 --rc geninfo_all_blocks=1 00:06:05.871 --rc geninfo_unexecuted_blocks=1 00:06:05.871 00:06:05.871 ' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.871 --rc genhtml_branch_coverage=1 00:06:05.871 --rc genhtml_function_coverage=1 00:06:05.871 --rc genhtml_legend=1 00:06:05.871 --rc geninfo_all_blocks=1 00:06:05.871 --rc geninfo_unexecuted_blocks=1 00:06:05.871 00:06:05.871 ' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.871 --rc genhtml_branch_coverage=1 00:06:05.871 --rc genhtml_function_coverage=1 00:06:05.871 --rc genhtml_legend=1 00:06:05.871 --rc geninfo_all_blocks=1 00:06:05.871 --rc geninfo_unexecuted_blocks=1 00:06:05.871 00:06:05.871 ' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.871 --rc genhtml_branch_coverage=1 00:06:05.871 --rc genhtml_function_coverage=1 00:06:05.871 --rc genhtml_legend=1 00:06:05.871 --rc geninfo_all_blocks=1 00:06:05.871 --rc geninfo_unexecuted_blocks=1 00:06:05.871 00:06:05.871 ' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
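Each time test/nvmf/common.sh is sourced, the log records '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' right after the traced test '[' '' -eq 1 ']': an unset variable expands to the empty string, and bash's [ cannot feed '' to the numeric -eq operator. The test still evaluates as false, so build_nvmf_app_args continues on the @37 branch and only the warning leaks into the log. A hedged sketch of the usual cleanup; the variable actually tested at line 33 is elided in the trace, so SOME_FLAG below is a placeholder:

  # Pattern captured in the log: an empty expansion reaches a numeric test.
  [ "$SOME_FLAG" -eq 1 ]        # -> [: : integer expression expected

  # Defensive spelling: default the expansion to 0 before comparing.
  [ "${SOME_FLAG:-0}" -eq 1 ]   # quietly false when the flag is unset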
00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:05.871 17:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.449 17:16:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:12.449 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:12.449 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.449 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:12.450 17:16:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:12.450 Found net devices under 0000:af:00.0: cvl_0_0 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:12.450 Found net devices under 0000:af:00.1: cvl_0_1 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.450 17:16:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:12.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:12.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms
00:06:12.450
00:06:12.450 --- 10.0.0.2 ping statistics ---
00:06:12.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:12.450 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:12.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:12.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms
00:06:12.450
00:06:12.450 --- 10.0.0.1 ping statistics ---
00:06:12.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:12.450 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2402518
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2402518
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2402518 ']'
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.450 17:16:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.450 [2024-12-09 17:16:40.995545] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
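The nvmftestinit sequence just traced builds the point-to-point topology the abort test runs over: the first e810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, ipts (the suite's iptables wrapper, which tags its rules with an SPDK_NVMF comment for later cleanup) opens TCP port 4420, and one ping in each direction proves the link before nvmf_tgt is launched inside the namespace. Condensed to its essentials, with the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk                               # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

Running the target behind 'ip netns exec' is what lets a single host exercise a real TCP path over physical NICs: traffic between 10.0.0.1 and 10.0.0.2 leaves one port and arrives on the other instead of short-circuiting through loopback.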
00:06:12.450 [2024-12-09 17:16:40.995588] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:12.450 [2024-12-09 17:16:41.074508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:12.450 [2024-12-09 17:16:41.116620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:12.450 [2024-12-09 17:16:41.116657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:12.450 [2024-12-09 17:16:41.116665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:12.450 [2024-12-09 17:16:41.116671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:12.450 [2024-12-09 17:16:41.116676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:12.450 [2024-12-09 17:16:41.117972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.450 [2024-12-09 17:16:41.118079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.450 [2024-12-09 17:16:41.118081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.450 [2024-12-09 17:16:41.254544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.450 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.451 Malloc0
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.451 Delay0
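With the reactors up and the RPC socket answering, abort.sh provisions the target over /var/tmp/spdk.sock: a TCP transport created with the traced flags (-o -u 8192 -a 256), a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev stacked on top with the 1000000-value read/write latency arguments from the trace, so that queued commands live long enough to be aborted. rpc_cmd is the suite's wrapper around scripts/rpc.py; done by hand, the same bring-up would look roughly like this sketch (the last three lines mirror the subsystem and listener entries that follow just below):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # same latency args as the trace
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The Delay0-over-Malloc0 stacking is the point of the test: without the artificial latency, most I/Os would complete before build/examples/abort could race an abort command against them.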
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.451 [2024-12-09 17:16:41.325669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.451 17:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:06:12.451 [2024-12-09 17:16:41.411557] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:06:14.355 [2024-12-09 17:16:43.439300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c390 is same with the state(6) to be set
00:06:14.355 Initializing NVMe Controllers
00:06:14.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:06:14.355 controller IO queue size 128 less than required
00:06:14.355 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:06:14.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:06:14.355 Initialization complete. Launching workers.
00:06:14.355 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37458
00:06:14.355 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37519, failed to submit 62
00:06:14.355 success 37462, unsuccessful 57, failed 0
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:14.355 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:14.355 rmmod nvme_tcp
00:06:14.355 rmmod nvme_fabrics
00:06:14.355 rmmod nvme_keyring
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2402518 ']'
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2402518
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2402518 ']'
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2402518
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2402518
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2402518'
00:06:14.614 killing process with pid 2402518
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2402518
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2402518
00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:14.614 17:16:43
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.614 17:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:17.152 00:06:17.152 real 0m11.105s 00:06:17.152 user 0m11.331s 00:06:17.152 sys 0m5.362s 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.152 ************************************ 00:06:17.152 END TEST nvmf_abort 00:06:17.152 ************************************ 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.152 ************************************ 00:06:17.152 START TEST nvmf_ns_hotplug_stress 00:06:17.152 ************************************ 00:06:17.152 17:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:17.152 * Looking for test storage... 
00:06:17.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.152 --rc genhtml_branch_coverage=1 00:06:17.152 --rc genhtml_function_coverage=1 00:06:17.152 --rc genhtml_legend=1 00:06:17.152 --rc geninfo_all_blocks=1 00:06:17.152 --rc geninfo_unexecuted_blocks=1 00:06:17.152 00:06:17.152 ' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.152 --rc genhtml_branch_coverage=1 00:06:17.152 --rc genhtml_function_coverage=1 00:06:17.152 --rc genhtml_legend=1 00:06:17.152 --rc geninfo_all_blocks=1 00:06:17.152 --rc geninfo_unexecuted_blocks=1 00:06:17.152 00:06:17.152 ' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.152 --rc genhtml_branch_coverage=1 00:06:17.152 --rc genhtml_function_coverage=1 00:06:17.152 --rc genhtml_legend=1 00:06:17.152 --rc geninfo_all_blocks=1 00:06:17.152 --rc geninfo_unexecuted_blocks=1 00:06:17.152 00:06:17.152 ' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.152 --rc genhtml_branch_coverage=1 00:06:17.152 --rc genhtml_function_coverage=1 00:06:17.152 --rc genhtml_legend=1 00:06:17.152 --rc geninfo_all_blocks=1 00:06:17.152 --rc geninfo_unexecuted_blocks=1 00:06:17.152 00:06:17.152 ' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.152 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
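The ever-growing $PATH above is paths/export.sh unconditionally prepending the Go, protoc and golangci directories each time it is sourced, so repeated sourcing stacks the same entries many times over. Lookup still works (first match wins), but if the duplication mattered, a first-occurrence filter along these lines would collapse it (dedupe_path is a hypothetical helper, not part of the SPDK scripts):

  # Collapse repeated $PATH entries, preserving first-seen order.
  dedupe_path() {
      local IFS=: entry out= seen=:
      for entry in $PATH; do
          [[ $seen == *":$entry:"* ]] && continue   # already emitted
          seen+="$entry:"
          out+="${out:+:}$entry"
      done
      PATH=$out
  }
  dedupe_path && echo "$PATH"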
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:17.153 17:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.723 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
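The `[: : integer expression expected` message logged just above is the shell objecting to `'[' '' -eq 1 ']'`: `-eq` needs integers on both sides, and whatever variable nvmf/common.sh line 33 tests expanded to the empty string in this run. Execution continues because the failing `[` simply yields false, but the noise is avoidable by defaulting the expansion; in this sketch $FLAG stands in for the (unidentified) variable being tested:

  FLAG=                            # empty, as in the logged run
  # [ "$FLAG" -eq 1 ]              # reproduces "[: : integer expression expected"
  if [ "${FLAG:-0}" -eq 1 ]; then  # empty expands to 0: quiet, and still false
      echo "flag enabled"
  fi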
local -ga e810 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:23.724 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.724 
17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:23.724 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:23.724 Found net devices under 0000:af:00.0: cvl_0_0 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
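What the pci_devs loop above is doing: the array was seeded from a PCI-bus cache keyed by vendor:device ID (here 0x8086:0x159b, the two E810 ports, both bound to the ice driver), and each surviving address is mapped to its kernel network interface by globbing sysfs, which is how cvl_0_0 turns up under 0000:af:00.0. A reduced sketch of that sysfs mapping, independent of the SPDK helpers:

  # List the net interfaces backed by one PCI function.
  pci=0000:af:00.0
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$netdir" ] || continue           # the glob may match nothing
      dev=${netdir##*/}                      # strip the sysfs path
      state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
      echo "Found net device under $pci: $dev ($state)"
  done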
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:23.724 Found net devices under 0000:af:00.1: cvl_0_1 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.724 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.725 17:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:06:23.725 00:06:23.725 --- 10.0.0.2 ping statistics --- 00:06:23.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.725 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:06:23.725 00:06:23.725 --- 10.0.0.1 ping statistics --- 00:06:23.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.725 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2406502 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2406502 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
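The nvmf_tcp_init sequence traced above builds SPDK's usual single-host, two-endpoint topology: the target port cvl_0_0 moves into a fresh namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps cvl_0_1 in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420 toward the initiator interface, and one ping in each direction proves the path. Condensed to its essential commands (same device names and addresses as the log):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # drop stale addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back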
2406502 ']' 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.725 17:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.725 [2024-12-09 17:16:52.298662] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:06:23.725 [2024-12-09 17:16:52.298711] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.725 [2024-12-09 17:16:52.379435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.725 [2024-12-09 17:16:52.419018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.725 [2024-12-09 17:16:52.419055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.725 [2024-12-09 17:16:52.419062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.725 [2024-12-09 17:16:52.419068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.725 [2024-12-09 17:16:52.419074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
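nvmfappstart then launches the target inside that namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE`, i.e. shared-memory id 0, every trace group enabled, reactors on cores 1-3) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock before any provisioning RPC is sent. A minimal version of that start-and-wait pattern (paths shortened, retry budget arbitrary):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for (( i = 0; i < 100; i++ )); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      # Ready once any RPC round-trips on the default socket.
      ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done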
00:06:23.725 [2024-12-09 17:16:52.420374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.725 [2024-12-09 17:16:52.420483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.725 [2024-12-09 17:16:52.420484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:23.984 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:24.243 [2024-12-09 17:16:53.342943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.243 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:24.546 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:24.855 [2024-12-09 17:16:53.744374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.855 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.855 17:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:25.114 Malloc0 00:06:25.114 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:25.372 Delay0 00:06:25.372 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.631 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:25.631 NULL1 00:06:25.631 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
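Before the stress loop begins, the trace above provisions the target purely over rpc.py: a TCP transport with 8192-byte in-capsule data, subsystem cnode1 (allow any host, serial SPDK00000000000001, at most 10 namespaces), data and discovery listeners on 10.0.0.2:4420, then the two namespaces: Delay0, a 32 MB malloc bdev wrapped by a delay bdev adding 1,000,000 us to every I/O path, and NULL1, a 1000 MB null bdev with 512-byte blocks. The same sequence pulled out of the trace (rpc.py path shortened):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MB, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512             # 1000 MB, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1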
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:25.890 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2406988 00:06:25.890 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:25.890 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:25.890 17:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.148 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.406 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:26.406 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:26.406 true 00:06:26.406 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:26.406 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.665 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.923 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:26.923 17:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:27.181 true 00:06:27.181 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:27.181 17:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.117 Read completed with error (sct=0, sc=11) 00:06:28.117 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.375 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:28.375 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:28.634 true 00:06:28.634 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:28.634 17:16:57 
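From here the log settles into the pattern that gives ns_hotplug_stress.sh its name: spdk_nvme_perf runs in the background against 10.0.0.2:4420 (30 seconds of 512-byte random reads at queue depth 128; -Q 1000 is why each expected read error appears once and then as "Message suppressed 999 times") and is tracked as PERF_PID. As long as `kill -0 $PERF_PID` reports it alive, the script detaches namespace 1, re-attaches Delay0, and grows NULL1 by one megabyte per pass (null_size 1001, 1002, ...), so the initiator's reads constantly race namespace removal, re-add and resize. The loop's shape, reduced from the traced script (paths shortened):

  rpc=./scripts/rpc.py
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug under I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # and re-plug
      (( ++null_size ))
      $rpc bdev_null_resize NULL1 $null_size                        # grow by 1 MB
  done
  wait "$PERF_PID"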
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.634 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.892 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:28.892 17:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:29.150 true 00:06:29.150 17:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:29.150 17:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.086 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.345 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:30.345 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:30.603 true 00:06:30.603 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:30.603 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.862 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.862 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:30.862 17:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:31.120 true 00:06:31.120 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:31.120 17:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.495 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.495 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:32.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.495 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:32.495 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:32.753 true 00:06:32.753 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:32.753 17:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.320 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.578 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:33.578 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:33.837 true 00:06:33.837 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:33.837 17:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.096 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.355 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:34.355 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:34.355 true 00:06:34.355 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:34.355 17:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.732 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.732 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:35.732 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:35.990 true 00:06:35.990 17:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:35.990 
17:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.248 17:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.248 17:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:36.248 17:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:36.507 true 00:06:36.507 17:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:36.507 17:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.701 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:37.701 17:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:37.960 true 00:06:37.960 17:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:37.960 17:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.894 17:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.894 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:38.894 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:39.152 true 00:06:39.152 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:39.152 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.410 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.703 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:39.703 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:39.703 true 00:06:39.703 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:39.703 17:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.078 17:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.078 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:41.078 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:41.336 true 00:06:41.336 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:41.336 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.336 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.594 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:41.594 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:41.852 true 00:06:41.852 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:41.852 17:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.227 17:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.227 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:43.227 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:43.227 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:43.227 true 00:06:43.227 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:43.227 17:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.163 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.422 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:44.422 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:44.422 true 00:06:44.422 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:44.422 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.680 17:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.938 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:44.938 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:45.196 true 00:06:45.196 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:45.196 17:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.571 17:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.571 17:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:46.571 17:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:46.571 true 00:06:46.571 17:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:46.571 17:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.830 17:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.089 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:47.089 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:47.347 true 00:06:47.347 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:47.347 17:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.541 17:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.541 17:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:48.541 17:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:48.799 true 00:06:48.799 17:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:48.799 17:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.057 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.316 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:49.316 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:49.316 true 00:06:49.316 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:49.316 17:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.692 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.692 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:50.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.692 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:50.692 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:50.951 true 00:06:50.951 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:50.951 17:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.886 17:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.886 17:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:51.886 17:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:52.144 true 00:06:52.144 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:52.144 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.402 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.402 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:52.402 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:52.660 true 00:06:52.660 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:52.660 17:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.036 17:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.036 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:54.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.036 17:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:54.036 17:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:54.036 true 00:06:54.294 17:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:54.294 17:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.860 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.118 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:55.118 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:55.376 true 00:06:55.376 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:55.376 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.635 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.893 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:55.893 17:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:55.893 true 00:06:55.894 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:55.894 17:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.872 Initializing NVMe Controllers 00:06:56.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:56.872 Controller IO queue size 128, less than required. 00:06:56.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.872 Controller IO queue size 128, less than required. 00:06:56.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:56.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:56.872 Initialization complete. Launching workers. 
00:06:56.872 ========================================================
00:06:56.872 Latency(us)
00:06:56.872 Device Information : IOPS MiB/s Average min max
00:06:56.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1321.77 0.65 61076.21 2240.19 1013355.39
00:06:56.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16178.77 7.90 7891.79 1555.58 300135.99
00:06:56.872 ========================================================
00:06:56.872 Total : 17500.53 8.55 11908.66 1555.58 1013355.39
00:06:56.872
00:06:57.165 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.165 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:57.165 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:57.451 true 00:06:57.451 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2406988 00:06:57.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2406988) - No such process 00:06:57.451 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2406988 00:06:57.451 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.710 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.710 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:57.710 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:57.710 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:57.710 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.710 17:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:57.968 null0 00:06:57.968 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.968 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.968 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:58.226 null1 00:06:58.226 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.226 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.226 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:58.226 null2 00:06:58.485 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.485 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.485 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:58.485 null3 00:06:58.485 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.485 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.485 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:58.743 null4 00:06:58.743 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.743 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.743 17:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:59.002 null5 00:06:59.002 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.002 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.002 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:59.261 null6 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:59.261 null7 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
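The single-namespace phase that closed above is easiest to follow as the loop its trace markers @44-@50 suggest. A minimal sketch, not the literal script text, assuming perf_pid is a variable holding the I/O generator's PID (2406988 in this run) and null_size carries over from the last traced value:

    # Reconstructed from the @44-@50 markers: while the I/O generator is
    # alive, detach namespace 1, re-attach Delay0, and grow NULL1 by 1 MB.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$perf_pid"; do                                        # @44
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
        ((null_size++))                                                  # @49
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # @50
    done

Once the generator exits, kill -0 fails with "No such process" and the loop falls through to the wait at @53, exactly as traced above.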
00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
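The eight-way phase being set up here follows the @58-@64 markers: eight null bdevs, then one background worker per bdev. A sketch under the assumption that bdev_null_create's positional arguments are total size in MiB and block size in bytes:

    # Reconstructed from @58-@64: create null0..null7 (100 MiB, 4096-byte
    # blocks), then launch one add_remove worker per bdev and record PIDs.
    # add_remove is the worker function sketched further below.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096    # @60
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &              # @63: nsid i+1 backed by null$i
        pids+=($!)                                    # @64
    done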
00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
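Each worker's body is recoverable from the interleaved @14-@18 markers: ten attach/detach cycles of one fixed namespace ID. A sketch, not the literal function:

    # Reconstructed add_remove worker (@14-@18): attach $bdev as namespace
    # $nsid, then detach it, ten times in a row.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

The parent then blocks on all eight workers, which is the @66 wait with the eight PIDs traced just below.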
00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:59.261 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2412542 2412544 2412545 2412547 2412549 2412551 2412552 2412554 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.262 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.521 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.780 17:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.039 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.298 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.557 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.558 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.816 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.816 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.816 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.816 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.817 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.817 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.817 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.817 17:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.075 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.076 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
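From this point the xtrace output of the eight workers interleaves freely, since they all race against nqn.2016-06.io.spdk:cnode1 at once. A hedged spot-check that could be run alongside, assuming rpc.py's nvmf_get_subsystems prints pretty-printed JSON with one "nsid" key per attached namespace:

    # Count the namespaces attached to the target mid-churn; the number
    # should bounce between 0 and 8 while the workers run.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_get_subsystems | grep -c '"nsid"'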
00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.335 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.594 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
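For reference, the (sct=0, sc=11) pairs in the suppressed read completions earlier in this run decode, against the standard NVMe generic command status table, to status code 0x0b, Invalid Namespace or Format: the expected completion for reads caught in flight while a namespace is detached under load. A trivial decode line:

    # sct=0 is the generic command status type; sc=11 decimal is 0x0b.
    printf 'sct=%#x sc=%#x -> Invalid Namespace or Format\n' 0 11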
00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.853 17:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.112 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.113 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.372 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.632 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.891 17:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.150 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.409 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.409 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.409 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.409 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.410 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:03.669 rmmod nvme_tcp 00:07:03.669 rmmod nvme_fabrics 00:07:03.669 rmmod nvme_keyring 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2406502 ']' 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2406502 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2406502 ']' 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2406502 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406502 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406502' 00:07:03.669 killing process with pid 
2406502 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2406502 00:07:03.669 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2406502 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.928 17:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.834 00:07:05.834 real 0m49.029s 00:07:05.834 user 3m18.104s 00:07:05.834 sys 0m15.387s 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:05.834 ************************************ 00:07:05.834 END TEST nvmf_ns_hotplug_stress 00:07:05.834 ************************************ 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.834 17:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 START TEST nvmf_delete_subsystem 00:07:06.092 ************************************ 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:06.092 * Looking for test storage... 
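Collapsed out of the xtrace above, the churn driven by ns_hotplug_stress.sh lines 16-18 is a short loop: ten passes that attach namespaces 1-8 (backed by null bdevs null0-null7) in a scrambled order, then detach all eight in another scrambled order. A minimal reconstruction from the trace alone; the shuf-based shuffling and the shortened rpc.py path are assumptions, not the script verbatim:

    for ((i = 0; i < 10; i++)); do
        # attach nsid n backed by bdev null(n-1), in random order
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do
            scripts/rpc.py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        # detach all eight again, in another random order
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
    done

The occasional out-of-order '(( ++i ))' / '(( i < 10 ))' stamps in the trace look like interleaved xtrace writes rather than a change in loop logic.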
00:07:06.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.092 --rc genhtml_branch_coverage=1 00:07:06.092 --rc genhtml_function_coverage=1 00:07:06.092 --rc genhtml_legend=1 00:07:06.092 --rc geninfo_all_blocks=1 00:07:06.092 --rc geninfo_unexecuted_blocks=1 00:07:06.092 00:07:06.092 ' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.092 --rc genhtml_branch_coverage=1 00:07:06.092 --rc genhtml_function_coverage=1 00:07:06.092 --rc genhtml_legend=1 00:07:06.092 --rc geninfo_all_blocks=1 00:07:06.092 --rc geninfo_unexecuted_blocks=1 00:07:06.092 00:07:06.092 ' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.092 --rc genhtml_branch_coverage=1 00:07:06.092 --rc genhtml_function_coverage=1 00:07:06.092 --rc genhtml_legend=1 00:07:06.092 --rc geninfo_all_blocks=1 00:07:06.092 --rc geninfo_unexecuted_blocks=1 00:07:06.092 00:07:06.092 ' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.092 --rc genhtml_branch_coverage=1 00:07:06.092 --rc genhtml_function_coverage=1 00:07:06.092 --rc genhtml_legend=1 00:07:06.092 --rc geninfo_all_blocks=1 00:07:06.092 --rc geninfo_unexecuted_blocks=1 00:07:06.092 00:07:06.092 ' 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.092 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.093 17:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:12.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.660 
17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:12.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.660 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:12.661 Found net devices under 0000:af:00.0: cvl_0_0 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:12.661 Found net devices under 0000:af:00.1: cvl_0_1 
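The NIC discovery above boils down to a sysfs walk per PCI function that matched the e810 ID table: common.sh globs /sys/bus/pci/devices/$pci/net/ and keeps whatever netdev names it finds there. A rough standalone equivalent of the @410-@428 steps, with the two matched functions hard-coded for illustration:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # every entry under the function's net/ dir is a kernel netdev name
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done

On this box each port resolves to exactly one netdev, cvl_0_0 and cvl_0_1, which then become the target and initiator interfaces.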
00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.661 17:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:12.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:07:12.661 00:07:12.661 --- 10.0.0.2 ping statistics --- 00:07:12.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.661 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:07:12.661 00:07:12.661 --- 10.0.0.1 ping statistics --- 00:07:12.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.661 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2416937 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2416937 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2416937 ']' 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.661 17:17:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.661 [2024-12-09 17:17:41.382231] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:07:12.661 [2024-12-09 17:17:41.382280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.661 [2024-12-09 17:17:41.461454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.661 [2024-12-09 17:17:41.506302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.661 [2024-12-09 17:17:41.506337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.661 [2024-12-09 17:17:41.506345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.661 [2024-12-09 17:17:41.506352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.661 [2024-12-09 17:17:41.506359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.661 [2024-12-09 17:17:41.507461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.661 [2024-12-09 17:17:41.507462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.661 [2024-12-09 17:17:41.651346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.661 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:12.661 17:17:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.662 [2024-12-09 17:17:41.671565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.662 NULL1 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.662 Delay0 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2417130 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:12.662 17:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:12.662 [2024-12-09 17:17:41.782484] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
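Stripped of the rpc_cmd plumbing and xtrace stamps, the target-side setup traced above is the RPC sequence below (arguments copied from the trace; the long Jenkins paths are shortened, and in the test each rpc.py call actually runs against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace). My reading is that the four bdev_delay_create latencies are microseconds, roughly one second each, so perf I/O stays in flight long enough for the upcoming delete to race it:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator side: 5 s of randrw (70% reads), qd 128, 512 B I/O, cores 2-3
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4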
00:07:14.566 17:17:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:14.566 17:17:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.566 17:17:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:15.133 Read completed with error (sct=0, sc=8)
00:07:15.133 starting I/O failed: -6
[... several hundred further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and periodic "starting I/O failed: -6" markers elided; they continue interleaved among the unique diagnostics kept below ...]
00:07:15.134 [2024-12-09 17:17:44.030734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdb40 is same with the state(6) to be set
00:07:15.134 [2024-12-09 17:17:44.031500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fe000d490 is same with the state(6) to be set
00:07:16.071 [2024-12-09 17:17:45.000076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce9b0 is same with the state(6) to be set
00:07:16.071 [2024-12-09 17:17:45.032397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd960 is same with the state(6) to be set
00:07:16.071 [2024-12-09 17:17:45.033675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fe000d7c0 is same with the state(6) to be set
00:07:16.071 [2024-12-09 17:17:45.033929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fe000d020 is same with the state(6) to be set
00:07:16.071 [2024-12-09 17:17:45.034489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fe0000c40 is same with the state(6) to be set
00:07:16.071 Initializing NVMe Controllers
00:07:16.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:16.071 Controller IO queue size 128, less than required.
00:07:16.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:16.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:16.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:16.071 Initialization complete. Launching workers.
00:07:16.071 ========================================================
00:07:16.071                                                                                                Latency(us)
00:07:16.071 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:07:16.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     152.10       0.07  890040.40     255.59 1008409.26
00:07:16.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     165.52       0.08 1106150.00     459.30 2002167.34
00:07:16.071 ========================================================
00:07:16.071 Total                                                                  :     317.62       0.16 1002660.89     255.59 2002167.34
00:07:16.071
00:07:16.071 [2024-12-09 17:17:45.035026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ce9b0 (9): Bad file descriptor
00:07:16.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:16.071 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.071 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:16.072 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2417130
00:07:16.072 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2417130
00:07:16.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2417130) - No such process
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2417130
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2417130
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
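The flood of completion errors above is the test behaving as intended: roughly five seconds of Delay0 I/O were still queued when the subsystem was deleted, so each in-flight command returns sct=0, sc=8 (the NVMe generic status "Command Aborted due to SQ Deletion"), and new submissions fail with -6, which matches -ENXIO once the qpairs are torn down. All the script has left to do is wait, with a bound, for the perf process to exit. A minimal sketch of that loop (names illustrative; the real one sits around lines 34-38 of delete_subsystem.sh):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; exit 1; }
      sleep 0.5
  done
  # the harness then asserts the exit status was nonzero: NOT wait "$perf_pid"

00:07:16.638 17:17:45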
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2417130 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.638 [2024-12-09 17:17:45.562348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.638 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2417710 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:16.639 17:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.639 [2024-12-09 17:17:45.655670] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, 
even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:17.206 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.206 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:17.206 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.465 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.465 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:17.465 17:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.032 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.032 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:18.032 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.598 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.599 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:18.599 17:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.166 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.166 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:19.166 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.733 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.734 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:19.734 17:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.734 Initializing NVMe Controllers 00:07:19.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.734 Controller IO queue size 128, less than required. 00:07:19.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:19.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:19.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:19.734 Initialization complete. Launching workers. 
00:07:19.734 ======================================================== 00:07:19.734 Latency(us) 00:07:19.734 Device Information : IOPS MiB/s Average min max 00:07:19.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001966.43 1000137.48 1005708.92 00:07:19.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003477.74 1000133.74 1009488.04 00:07:19.734 ======================================================== 00:07:19.734 Total : 256.00 0.12 1002722.09 1000133.74 1009488.04 00:07:19.734 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2417710 00:07:19.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2417710) - No such process 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2417710 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.993 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.993 rmmod nvme_tcp 00:07:19.993 rmmod nvme_fabrics 00:07:19.993 rmmod nvme_keyring 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2416937 ']' 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2416937 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2416937 ']' 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2416937 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416937 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416937' 00:07:20.252 killing process with pid 2416937 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2416937 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2416937 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.252 17:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.790 00:07:22.790 real 0m16.452s 00:07:22.790 user 0m29.627s 00:07:22.790 sys 0m5.509s 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.790 ************************************ 00:07:22.790 END TEST nvmf_delete_subsystem 00:07:22.790 ************************************ 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.790 ************************************ 00:07:22.790 START TEST nvmf_host_management 00:07:22.790 ************************************ 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.790 * Looking for test storage... 
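Before the host_management run gets going, note the nvmftestfini teardown idiom traced just above: unload the kernel NVMe/TCP stack with retries, then restore only the firewall rules the harness did not tag. A rough sketch under the names visible in this log (ip netns delete stands in for the harness's _remove_spdk_ns helper):

  sync
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # also drags nvme_fabrics/nvme_keyring out, per the rmmod lines above
      sleep 1
  done
  modprobe -v -r nvme-fabrics || true
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except harness-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # stand-in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1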
00:07:22.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:22.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.790 --rc genhtml_branch_coverage=1
00:07:22.790 --rc genhtml_function_coverage=1
00:07:22.790 --rc genhtml_legend=1
00:07:22.790 --rc geninfo_all_blocks=1
00:07:22.790 --rc geninfo_unexecuted_blocks=1
00:07:22.790
00:07:22.790 '
[... the identical option block is traced three more times as LCOV_OPTS is assigned and as LCOV is exported and assigned (common/autotest_common.sh@1724-@1725); repeats elided ...]
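The lt/cmp_versions trace above splits each version string on '.', '-' and ':' and compares the components numerically left to right, padding the shorter list with zeros; lt 1.15 2 returns true here, which is why the --rc lcov options get exported. A self-contained sketch of the same idea (illustrative rewrite, not the scripts/common.sh source):

  version_lt() {    # true if $1 sorts before $2, numeric component-wise
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo yes   # prints yes: decided at the first component, 1 < 2

00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source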
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.790 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated /opt/golangci, /opt/protoc and /opt/go toolchain segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:22.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:07:22.791 17:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
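The "[: : integer expression expected" complaint above is a small genuine bash bug: nvmf/common.sh line 33 hands an unset variable to a numeric test, so '[' sees an empty string where -eq needs an integer. A defensive pattern that avoids it (SOME_FLAG is an illustrative name, not the variable nvmf/common.sh actually tests):

  # [ "$SOME_FLAG" -eq 1 ] fails when SOME_FLAG is unset or empty;
  # defaulting the expansion keeps the test well-formed:
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi

00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local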
-ga e810 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.360 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:29.361 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:07:29.361 Found 0000:af:00.1 (0x8086 - 0x159b)
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:07:29.361 Found net devices under 0000:af:00.0: cvl_0_0
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
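The scan found both ports of the e810 NIC (cvl_0_0 and cvl_0_1), and the nvmf_tcp_init trace that follows builds a two-endpoint topology on a single host by pushing the target-side port into its own network namespace. The same commands, collected from the trace below (device and namespace names as in this log; run as root):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

00:07:29.361 17:17:57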
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:29.361 Found net devices under 0000:af:00.1: cvl_0_1 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:07:29.361 00:07:29.361 --- 10.0.0.2 ping statistics --- 00:07:29.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.361 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:07:29.361 00:07:29.361 --- 10.0.0.1 ping statistics --- 00:07:29.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.361 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2421787 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2421787 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:29.361 17:17:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2421787 ']' 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.361 17:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 [2024-12-09 17:17:57.807546] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:07:29.361 [2024-12-09 17:17:57.807589] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.362 [2024-12-09 17:17:57.881683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.362 [2024-12-09 17:17:57.921688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.362 [2024-12-09 17:17:57.921721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.362 [2024-12-09 17:17:57.921728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.362 [2024-12-09 17:17:57.921734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.362 [2024-12-09 17:17:57.921740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
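
Note: the nvmf/common.sh trace above boils down to a small amount of plain iproute2/iptables work plus the target launch. A condensed sketch of what ran, under the values logged on this rig (the cvl_0_0/cvl_0_1 names come from the ice driver and the 10.0.0.1/10.0.0.2 addresses from the test defaults, so both may differ on other setups):

    # Move one E810 port into a private namespace to play the target;
    # its sibling stays in the root namespace as the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in; the ipts wrapper tags the rule with an
    # SPDK_NVMF comment so teardown can strip exactly these rules later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # Start the target inside the namespace (flags as traced above).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
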
00:07:29.362 [2024-12-09 17:17:57.923099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.362 [2024-12-09 17:17:57.923122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.362 [2024-12-09 17:17:57.923206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.362 [2024-12-09 17:17:57.923208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 [2024-12-09 17:17:58.068344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 Malloc0 00:07:29.362 [2024-12-09 17:17:58.141138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2421970 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2421970 /var/tmp/bdevperf.sock 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2421970 ']' 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:29.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:29.362 { 00:07:29.362 "params": { 00:07:29.362 "name": "Nvme$subsystem", 00:07:29.362 "trtype": "$TEST_TRANSPORT", 00:07:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:29.362 "adrfam": "ipv4", 00:07:29.362 "trsvcid": "$NVMF_PORT", 00:07:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:29.362 "hdgst": ${hdgst:-false}, 00:07:29.362 "ddgst": ${ddgst:-false} 00:07:29.362 }, 00:07:29.362 "method": "bdev_nvme_attach_controller" 00:07:29.362 } 00:07:29.362 EOF 00:07:29.362 )") 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:29.362 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:29.362 "params": { 00:07:29.362 "name": "Nvme0", 00:07:29.362 "trtype": "tcp", 00:07:29.362 "traddr": "10.0.0.2", 00:07:29.362 "adrfam": "ipv4", 00:07:29.362 "trsvcid": "4420", 00:07:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:29.362 "hdgst": false, 00:07:29.362 "ddgst": false 00:07:29.362 }, 00:07:29.362 "method": "bdev_nvme_attach_controller" 00:07:29.362 }' 00:07:29.362 [2024-12-09 17:17:58.237837] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:07:29.362 [2024-12-09 17:17:58.237883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421970 ] 00:07:29.362 [2024-12-09 17:17:58.315590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.362 [2024-12-09 17:17:58.355078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.620 Running I/O for 10 seconds... 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:29.620 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:29.880 
17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.880 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.880 [2024-12-09 17:17:58.955853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174d710 is same with the state(6) to be set 00:07:29.880 [2024-12-09 17:17:58.956015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.880 [2024-12-09 17:17:58.956048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.880 [2024-12-09 17:17:58.956063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.880 [2024-12-09 17:17:58.956070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.880 [2024-12-09 17:17:58.956079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.880 [2024-12-09 17:17:58.956086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.880 [2024-12-09 17:17:58.956094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.880 [2024-12-09 17:17:58.956106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.880 [2024-12-09 17:17:58.956114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.880 
[2024-12-09 17:17:58.956121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.880 [2024-12-09 17:17:58.956129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 
[2024-12-09 17:17:58.956276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 
[2024-12-09 17:17:58.956425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 
[2024-12-09 17:17:58.956571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 [2024-12-09 17:17:58.956702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.881 
[2024-12-09 17:17:58.956717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.881 [2024-12-09 17:17:58.956724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 
[2024-12-09 17:17:58.956870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.956993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.882 [2024-12-09 17:17:58.956999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.957929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:29.882 task offset: 98944 on job bdev=Nvme0n1 fails 00:07:29.882 
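
Note: the wall of ABORTED - SQ DELETION completions above is the intended failure signature, not a malfunction of the run: host_management.sh first waits until bdevperf has real I/O in flight, then revokes the host's access, so the target drops the queue pair and every queued WRITE/READ (lba 98304 through 106368 here) completes aborted. The gate is the waitforio loop traced above; roughly, with rpc_cmd expanded to the scripts/rpc.py client it wraps (the actual loop bounds its retries rather than spinning forever):

    # Poll bdevperf's private RPC socket until Nvme0n1 has completed at
    # least 100 reads (the trace saw 78 on the first poll, 707 on the next).
    while :; do
        ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 0.25
    done
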
00:07:29.882 Latency(us) 00:07:29.882 [2024-12-09T16:17:59.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.882 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:29.882 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:29.882 Verification LBA range: start 0x0 length 0x400 00:07:29.882 Nvme0n1 : 0.40 1930.13 120.63 160.84 0.00 29788.32 1396.54 27088.21 00:07:29.882 [2024-12-09T16:17:59.061Z] =================================================================================================================== 00:07:29.882 [2024-12-09T16:17:59.061Z] Total : 1930.13 120.63 160.84 0.00 29788.32 1396.54 27088.21 00:07:29.882 [2024-12-09 17:17:58.960369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.882 [2024-12-09 17:17:58.960393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x562aa0 (9): Bad file descriptor 00:07:29.882 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.882 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:29.882 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.882 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.882 [2024-12-09 17:17:58.963573] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:29.882 [2024-12-09 17:17:58.963639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:29.882 [2024-12-09 17:17:58.963663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.882 [2024-12-09 17:17:58.963678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:29.882 [2024-12-09 17:17:58.963686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:29.882 [2024-12-09 17:17:58.963693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:29.882 [2024-12-09 17:17:58.963700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x562aa0 00:07:29.882 [2024-12-09 17:17:58.963719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x562aa0 (9): Bad file descriptor 00:07:29.882 [2024-12-09 17:17:58.963731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:29.882 [2024-12-09 17:17:58.963738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:29.882 [2024-12-09 17:17:58.963748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:29.882 [2024-12-09 17:17:58.963756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
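
Note: the Fabric CONNECT failures above (sct 1, sc 132, "does not allow host") are the second half of the same injection: with nqn.2016-06.io.spdk:host0 removed from cnode0's allowed hosts, bdevperf's controller reset cannot re-establish its queues, which is exactly what the test asserts before restoring access. The revoke/restore pair, written as plain RPCs against the target (rpc_cmd in the trace is a wrapper over scripts/rpc.py):

    # Revoke access: in-flight I/O aborts and reconnects die at Fabric CONNECT.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access: the follow-up bdevperf run below connects and verifies cleanly.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
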
00:07:29.882 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.882 17:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2421970 00:07:30.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2421970) - No such process 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.818 { 00:07:30.818 "params": { 00:07:30.818 "name": "Nvme$subsystem", 00:07:30.818 "trtype": "$TEST_TRANSPORT", 00:07:30.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.818 "adrfam": "ipv4", 00:07:30.818 "trsvcid": "$NVMF_PORT", 00:07:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.818 "hdgst": ${hdgst:-false}, 00:07:30.818 "ddgst": ${ddgst:-false} 00:07:30.818 }, 00:07:30.818 "method": "bdev_nvme_attach_controller" 00:07:30.818 } 00:07:30.818 EOF 00:07:30.818 )") 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:30.818 17:17:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.818 "params": { 00:07:30.818 "name": "Nvme0", 00:07:30.818 "trtype": "tcp", 00:07:30.818 "traddr": "10.0.0.2", 00:07:30.818 "adrfam": "ipv4", 00:07:30.818 "trsvcid": "4420", 00:07:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:30.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:30.818 "hdgst": false, 00:07:30.818 "ddgst": false 00:07:30.818 }, 00:07:30.818 "method": "bdev_nvme_attach_controller" 00:07:30.818 }' 00:07:31.077 [2024-12-09 17:18:00.028349] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:07:31.077 [2024-12-09 17:18:00.028402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422294 ] 00:07:31.077 [2024-12-09 17:18:00.105451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.077 [2024-12-09 17:18:00.145229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.334 Running I/O for 1 seconds... 00:07:32.268 1984.00 IOPS, 124.00 MiB/s 00:07:32.268 Latency(us) 00:07:32.268 [2024-12-09T16:18:01.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:32.268 Verification LBA range: start 0x0 length 0x400 00:07:32.268 Nvme0n1 : 1.00 2041.45 127.59 0.00 0.00 30861.14 6054.28 27462.70 00:07:32.268 [2024-12-09T16:18:01.447Z] =================================================================================================================== 00:07:32.268 [2024-12-09T16:18:01.447Z] Total : 2041.45 127.59 0.00 0.00 30861.14 6054.28 27462.70 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.526 rmmod nvme_tcp 00:07:32.526 rmmod nvme_fabrics 00:07:32.526 rmmod nvme_keyring 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2421787 ']' 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2421787 00:07:32.526 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2421787 ']' 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2421787 00:07:32.527 17:18:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421787 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421787' 00:07:32.527 killing process with pid 2421787 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2421787 00:07:32.527 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2421787 00:07:32.785 [2024-12-09 17:18:01.771882] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.785 17:18:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:34.737 00:07:34.737 real 0m12.329s 00:07:34.737 user 0m19.291s 00:07:34.737 sys 0m5.586s 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.737 ************************************ 00:07:34.737 END TEST nvmf_host_management 00:07:34.737 ************************************ 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
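
Note: the nvmftestfini teardown traced above mirrors the prologue: unload the host-side NVMe modules, strip only the SPDK-tagged firewall rules, drop the target namespace, and flush the initiator address (that last flush is the nvmf/common.sh@303 step at the start of the next trace line). Condensed below; the ip netns delete is an assumption about what the _remove_spdk_ns helper amounts to here, since its body is not traced:

    modprobe -r nvme-tcp        # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper: keep all but SPDK rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
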
00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.737 17:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.044 ************************************ 00:07:35.044 START TEST nvmf_lvol 00:07:35.044 ************************************ 00:07:35.044 17:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:35.044 * Looking for test storage... 00:07:35.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.044 --rc genhtml_branch_coverage=1 00:07:35.044 --rc genhtml_function_coverage=1 00:07:35.044 --rc genhtml_legend=1 00:07:35.044 --rc geninfo_all_blocks=1 00:07:35.044 --rc geninfo_unexecuted_blocks=1 00:07:35.044 00:07:35.044 ' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.044 --rc genhtml_branch_coverage=1 00:07:35.044 --rc genhtml_function_coverage=1 00:07:35.044 --rc genhtml_legend=1 00:07:35.044 --rc geninfo_all_blocks=1 00:07:35.044 --rc geninfo_unexecuted_blocks=1 00:07:35.044 00:07:35.044 ' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.044 --rc genhtml_branch_coverage=1 00:07:35.044 --rc genhtml_function_coverage=1 00:07:35.044 --rc genhtml_legend=1 00:07:35.044 --rc geninfo_all_blocks=1 00:07:35.044 --rc geninfo_unexecuted_blocks=1 00:07:35.044 00:07:35.044 ' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.044 --rc genhtml_branch_coverage=1 00:07:35.044 --rc genhtml_function_coverage=1 00:07:35.044 --rc genhtml_legend=1 00:07:35.044 --rc geninfo_all_blocks=1 00:07:35.044 --rc geninfo_unexecuted_blocks=1 00:07:35.044 00:07:35.044 ' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
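
The probe above checks whether the installed lcov predates 2.x so the right coverage flags get exported. A minimal re-implementation of the traced cmp_versions walk, assuming purely numeric fields (the real helper also validates each field via the decimal function seen in the trace):

    # hedged sketch of scripts/common.sh's version compare
    lt() {
        local IFS=.-:                          # split on '.', '-' and ':' as the trace does
        local -a ver1 ver2
        read -ra ver1 <<< "$1"; read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer, not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1                               # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov, exporting the --rc lcov_*_coverage=1 opts"   # mirrors the traced 'lt 1.15 2'
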
00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.044 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.045 17:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.616 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.616 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.616 17:18:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.616 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.617 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.617 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.617 17:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:07:41.617 00:07:41.617 --- 10.0.0.2 ping statistics --- 00:07:41.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.617 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:07:41.617 00:07:41.617 --- 10.0.0.1 ping statistics --- 00:07:41.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.617 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2426553 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2426553 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2426553 ']' 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 [2024-12-09 17:18:10.294656] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
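
Before nvmf_tgt comes up, it is worth collecting the namespace plumbing traced just above (nvmf_tcp_init) in one place. The commands below are copied from the log with only comments added: the target-side e810 port is moved into its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) exchange real TCP traffic on a single host.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the log
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every target-side command from here on, including the nvmf_tgt launch, is wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD prefix in the trace does.
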
00:07:41.617 [2024-12-09 17:18:10.294700] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.617 [2024-12-09 17:18:10.374305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.617 [2024-12-09 17:18:10.414662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.617 [2024-12-09 17:18:10.414697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.617 [2024-12-09 17:18:10.414704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.617 [2024-12-09 17:18:10.414710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.617 [2024-12-09 17:18:10.414715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.617 [2024-12-09 17:18:10.416037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.617 [2024-12-09 17:18:10.416146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.617 [2024-12-09 17:18:10.416145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:41.617 [2024-12-09 17:18:10.729849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.617 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:41.876 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:41.876 17:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:42.135 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:42.135 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:42.393 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:42.652 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3bfed71e-d654-41d8-9c30-43c67584ba53 00:07:42.652 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3bfed71e-d654-41d8-9c30-43c67584ba53 lvol 20 00:07:42.652 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1febf405-86b6-4eaf-af54-3dbd4196cfcc 00:07:42.652 17:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.911 17:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1febf405-86b6-4eaf-af54-3dbd4196cfcc 00:07:43.169 17:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.427 [2024-12-09 17:18:12.395347] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.427 17:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.685 17:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2427034 00:07:43.685 17:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:43.685 17:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:44.620 17:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1febf405-86b6-4eaf-af54-3dbd4196cfcc MY_SNAPSHOT 00:07:44.878 17:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6a326488-8785-494e-b7a5-79c0a91d5d71 00:07:44.878 17:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1febf405-86b6-4eaf-af54-3dbd4196cfcc 30 00:07:45.137 17:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6a326488-8785-494e-b7a5-79c0a91d5d71 MY_CLONE 00:07:45.396 17:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=03c803d5-2544-47cc-8803-a026e5a2b145 00:07:45.396 17:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 03c803d5-2544-47cc-8803-a026e5a2b145 00:07:45.962 17:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2427034 00:07:54.075 Initializing NVMe Controllers 00:07:54.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:54.075 Controller IO queue size 128, less than required. 00:07:54.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
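
Stripped of the xtrace noise, the lvol exercise above is about a dozen rpc.py calls. This condensed sketch keeps the log's sizes and names; $rpc, $lvs, $lvol, $snap and $clone are illustrative shorthand for the literal script path and UUIDs the trace captured.

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # initial size 20 (LVOL_BDEV_INIT_SIZE)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf drives randwrite I/O over TCP, mutate the lvol live:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                      # grow 20 -> 30 (LVOL_BDEV_FINAL_SIZE)
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot

The point of the test is the last four calls: snapshot, resize, clone, and inflate all happen while the perf job started at target/nvmf_lvol.sh@41 is still writing to the exported namespace.
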
00:07:54.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:54.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:54.075 Initialization complete. Launching workers. 00:07:54.075 ======================================================== 00:07:54.075 Latency(us) 00:07:54.075 Device Information : IOPS MiB/s Average min max 00:07:54.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11808.40 46.13 10839.07 1581.64 60848.94 00:07:54.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11959.00 46.71 10704.09 1200.38 62975.63 00:07:54.075 ======================================================== 00:07:54.075 Total : 23767.40 92.84 10771.15 1200.38 62975.63 00:07:54.075 00:07:54.075 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:54.333 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1febf405-86b6-4eaf-af54-3dbd4196cfcc 00:07:54.333 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bfed71e-d654-41d8-9c30-43c67584ba53 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.591 rmmod nvme_tcp 00:07:54.591 rmmod nvme_fabrics 00:07:54.591 rmmod nvme_keyring 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.591 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2426553 ']' 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2426553 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2426553 ']' 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2426553 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.592 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426553 00:07:54.851 17:18:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426553' 00:07:54.851 killing process with pid 2426553 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2426553 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2426553 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.851 17:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.386 00:07:57.386 real 0m22.124s 00:07:57.386 user 1m3.411s 00:07:57.386 sys 0m7.576s 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 ************************************ 00:07:57.386 END TEST nvmf_lvol 00:07:57.386 ************************************ 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 ************************************ 00:07:57.386 START TEST nvmf_lvs_grow 00:07:57.386 ************************************ 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:57.386 * Looking for test storage... 
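
Before following the lvs_grow storage probe, note the lvol cleanup order traced just above: teardown runs in reverse order of creation. A hedged sketch with the same illustrative variables as before:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # detach hosts first
    $rpc bdev_lvol_delete "$lvol"                           # then the logical volume
    $rpc bdev_lvol_delete_lvstore -u "$lvs"                 # finally the store on raid0
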
00:07:57.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.386 --rc genhtml_branch_coverage=1 00:07:57.386 --rc genhtml_function_coverage=1 00:07:57.386 --rc genhtml_legend=1 00:07:57.386 --rc geninfo_all_blocks=1 00:07:57.386 --rc geninfo_unexecuted_blocks=1 00:07:57.386 00:07:57.386 ' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.386 --rc genhtml_branch_coverage=1 00:07:57.386 --rc genhtml_function_coverage=1 00:07:57.386 --rc genhtml_legend=1 00:07:57.386 --rc geninfo_all_blocks=1 00:07:57.386 --rc geninfo_unexecuted_blocks=1 00:07:57.386 00:07:57.386 ' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.386 --rc genhtml_branch_coverage=1 00:07:57.386 --rc genhtml_function_coverage=1 00:07:57.386 --rc genhtml_legend=1 00:07:57.386 --rc geninfo_all_blocks=1 00:07:57.386 --rc geninfo_unexecuted_blocks=1 00:07:57.386 00:07:57.386 ' 00:07:57.386 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.387 --rc genhtml_branch_coverage=1 00:07:57.387 --rc genhtml_function_coverage=1 00:07:57.387 --rc genhtml_legend=1 00:07:57.387 --rc geninfo_all_blocks=1 00:07:57.387 --rc geninfo_unexecuted_blocks=1 00:07:57.387 00:07:57.387 ' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:57.387 17:18:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.387 17:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.997 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.998 17:18:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.998 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.998 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:08:03.998 00:08:03.998 --- 10.0.0.2 ping statistics --- 00:08:03.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.998 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:03.998 00:08:03.998 --- 10.0.0.1 ping statistics --- 00:08:03.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.998 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2432363 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2432363 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2432363 ']' 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.998 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.998 [2024-12-09 17:18:32.410650] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:08:03.998 [2024-12-09 17:18:32.410700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.998 [2024-12-09 17:18:32.486708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.999 [2024-12-09 17:18:32.526502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.999 [2024-12-09 17:18:32.526541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.999 [2024-12-09 17:18:32.526549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.999 [2024-12-09 17:18:32.526555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.999 [2024-12-09 17:18:32.526561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.999 [2024-12-09 17:18:32.527083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.999 [2024-12-09 17:18:32.827264] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.999 ************************************ 00:08:03.999 START TEST lvs_grow_clean 00:08:03.999 ************************************ 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.999 17:18:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.999 17:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.999 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.999 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:04.258 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:04.258 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:04.258 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:04.516 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:04.516 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:04.516 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 lvol 150 00:08:04.516 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c4fea60a-97b7-472c-b4e7-caf15b587935 00:08:04.517 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:04.775 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:04.775 [2024-12-09 17:18:33.870115] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:04.775 [2024-12-09 17:18:33.870164] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:04.775 true 00:08:04.775 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:04.775 17:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:05.033 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:05.033 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.292 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4fea60a-97b7-472c-b4e7-caf15b587935 00:08:05.292 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.551 [2024-12-09 17:18:34.620375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.551 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2432856 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2432856 /var/tmp/bdevperf.sock 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2432856 ']' 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.809 17:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.809 [2024-12-09 17:18:34.848412] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:08:05.809 [2024-12-09 17:18:34.848458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432856 ] 00:08:05.810 [2024-12-09 17:18:34.921937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.810 [2024-12-09 17:18:34.960549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.075 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.075 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:06.075 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:06.334 Nvme0n1 00:08:06.334 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:06.593 [ 00:08:06.593 { 00:08:06.593 "name": "Nvme0n1", 00:08:06.593 "aliases": [ 00:08:06.593 "c4fea60a-97b7-472c-b4e7-caf15b587935" 00:08:06.593 ], 00:08:06.593 "product_name": "NVMe disk", 00:08:06.593 "block_size": 4096, 00:08:06.593 "num_blocks": 38912, 00:08:06.593 "uuid": "c4fea60a-97b7-472c-b4e7-caf15b587935", 00:08:06.593 "numa_id": 1, 00:08:06.593 "assigned_rate_limits": { 00:08:06.593 "rw_ios_per_sec": 0, 00:08:06.593 "rw_mbytes_per_sec": 0, 00:08:06.593 "r_mbytes_per_sec": 0, 00:08:06.593 "w_mbytes_per_sec": 0 00:08:06.593 }, 00:08:06.593 "claimed": false, 00:08:06.593 "zoned": false, 00:08:06.593 "supported_io_types": { 00:08:06.593 "read": true, 00:08:06.593 "write": true, 00:08:06.593 "unmap": true, 00:08:06.593 "flush": true, 00:08:06.593 "reset": true, 00:08:06.593 "nvme_admin": true, 00:08:06.593 "nvme_io": true, 00:08:06.593 "nvme_io_md": false, 00:08:06.593 "write_zeroes": true, 00:08:06.593 "zcopy": false, 00:08:06.593 "get_zone_info": false, 00:08:06.593 "zone_management": false, 00:08:06.593 "zone_append": false, 00:08:06.593 "compare": true, 00:08:06.593 "compare_and_write": true, 00:08:06.593 "abort": true, 00:08:06.593 "seek_hole": false, 00:08:06.593 "seek_data": false, 00:08:06.593 "copy": true, 00:08:06.593 "nvme_iov_md": false 00:08:06.593 }, 00:08:06.593 "memory_domains": [ 00:08:06.593 { 00:08:06.593 "dma_device_id": "system", 00:08:06.593 "dma_device_type": 1 00:08:06.593 } 00:08:06.593 ], 00:08:06.593 "driver_specific": { 00:08:06.593 "nvme": [ 00:08:06.593 { 00:08:06.593 "trid": { 00:08:06.593 "trtype": "TCP", 00:08:06.593 "adrfam": "IPv4", 00:08:06.593 "traddr": "10.0.0.2", 00:08:06.593 "trsvcid": "4420", 00:08:06.593 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:06.593 }, 00:08:06.594 "ctrlr_data": { 00:08:06.594 "cntlid": 1, 00:08:06.594 "vendor_id": "0x8086", 00:08:06.594 "model_number": "SPDK bdev Controller", 00:08:06.594 "serial_number": "SPDK0", 00:08:06.594 "firmware_revision": "25.01", 00:08:06.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.594 "oacs": { 00:08:06.594 "security": 0, 00:08:06.594 "format": 0, 00:08:06.594 "firmware": 0, 00:08:06.594 "ns_manage": 0 00:08:06.594 }, 00:08:06.594 "multi_ctrlr": true, 00:08:06.594 
"ana_reporting": false 00:08:06.594 }, 00:08:06.594 "vs": { 00:08:06.594 "nvme_version": "1.3" 00:08:06.594 }, 00:08:06.594 "ns_data": { 00:08:06.594 "id": 1, 00:08:06.594 "can_share": true 00:08:06.594 } 00:08:06.594 } 00:08:06.594 ], 00:08:06.594 "mp_policy": "active_passive" 00:08:06.594 } 00:08:06.594 } 00:08:06.594 ] 00:08:06.594 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2432883 00:08:06.594 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:06.594 17:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:06.594 Running I/O for 10 seconds... 00:08:07.531 Latency(us) 00:08:07.531 [2024-12-09T16:18:36.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.531 Nvme0n1 : 1.00 23340.00 91.17 0.00 0.00 0.00 0.00 0.00 00:08:07.531 [2024-12-09T16:18:36.710Z] =================================================================================================================== 00:08:07.531 [2024-12-09T16:18:36.710Z] Total : 23340.00 91.17 0.00 0.00 0.00 0.00 0.00 00:08:07.531 00:08:08.467 17:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:08.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.726 Nvme0n1 : 2.00 23532.00 91.92 0.00 0.00 0.00 0.00 0.00 00:08:08.726 [2024-12-09T16:18:37.905Z] =================================================================================================================== 00:08:08.726 [2024-12-09T16:18:37.905Z] Total : 23532.00 91.92 0.00 0.00 0.00 0.00 0.00 00:08:08.726 00:08:08.726 true 00:08:08.726 17:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:08.726 17:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:08.984 17:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:08.984 17:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:08.984 17:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2432883 00:08:09.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.553 Nvme0n1 : 3.00 23611.00 92.23 0.00 0.00 0.00 0.00 0.00 00:08:09.553 [2024-12-09T16:18:38.732Z] =================================================================================================================== 00:08:09.553 [2024-12-09T16:18:38.732Z] Total : 23611.00 92.23 0.00 0.00 0.00 0.00 0.00 00:08:09.553 00:08:10.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.489 Nvme0n1 : 4.00 23683.00 92.51 0.00 0.00 0.00 0.00 0.00 00:08:10.489 [2024-12-09T16:18:39.668Z] 
=================================================================================================================== 00:08:10.489 [2024-12-09T16:18:39.668Z] Total : 23683.00 92.51 0.00 0.00 0.00 0.00 0.00 00:08:10.489 00:08:11.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.870 Nvme0n1 : 5.00 23632.40 92.31 0.00 0.00 0.00 0.00 0.00 00:08:11.870 [2024-12-09T16:18:41.049Z] =================================================================================================================== 00:08:11.870 [2024-12-09T16:18:41.049Z] Total : 23632.40 92.31 0.00 0.00 0.00 0.00 0.00 00:08:11.870 00:08:12.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.806 Nvme0n1 : 6.00 23687.67 92.53 0.00 0.00 0.00 0.00 0.00 00:08:12.806 [2024-12-09T16:18:41.985Z] =================================================================================================================== 00:08:12.806 [2024-12-09T16:18:41.985Z] Total : 23687.67 92.53 0.00 0.00 0.00 0.00 0.00 00:08:12.806 00:08:13.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.742 Nvme0n1 : 7.00 23725.57 92.68 0.00 0.00 0.00 0.00 0.00 00:08:13.742 [2024-12-09T16:18:42.921Z] =================================================================================================================== 00:08:13.742 [2024-12-09T16:18:42.921Z] Total : 23725.57 92.68 0.00 0.00 0.00 0.00 0.00 00:08:13.742 00:08:14.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.679 Nvme0n1 : 8.00 23756.12 92.80 0.00 0.00 0.00 0.00 0.00 00:08:14.679 [2024-12-09T16:18:43.858Z] =================================================================================================================== 00:08:14.679 [2024-12-09T16:18:43.858Z] Total : 23756.12 92.80 0.00 0.00 0.00 0.00 0.00 00:08:14.679 00:08:15.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.616 Nvme0n1 : 9.00 23786.00 92.91 0.00 0.00 0.00 0.00 0.00 00:08:15.616 [2024-12-09T16:18:44.795Z] =================================================================================================================== 00:08:15.616 [2024-12-09T16:18:44.795Z] Total : 23786.00 92.91 0.00 0.00 0.00 0.00 0.00 00:08:15.616 00:08:16.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.550 Nvme0n1 : 10.00 23808.10 93.00 0.00 0.00 0.00 0.00 0.00 00:08:16.550 [2024-12-09T16:18:45.729Z] =================================================================================================================== 00:08:16.550 [2024-12-09T16:18:45.729Z] Total : 23808.10 93.00 0.00 0.00 0.00 0.00 0.00 00:08:16.550 00:08:16.550 00:08:16.551 Latency(us) 00:08:16.551 [2024-12-09T16:18:45.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.551 Nvme0n1 : 10.00 23810.17 93.01 0.00 0.00 5372.61 3167.57 13606.52 00:08:16.551 [2024-12-09T16:18:45.730Z] =================================================================================================================== 00:08:16.551 [2024-12-09T16:18:45.730Z] Total : 23810.17 93.01 0.00 0.00 5372.61 3167.57 13606.52 00:08:16.551 { 00:08:16.551 "results": [ 00:08:16.551 { 00:08:16.551 "job": "Nvme0n1", 00:08:16.551 "core_mask": "0x2", 00:08:16.551 "workload": "randwrite", 00:08:16.551 "status": "finished", 00:08:16.551 "queue_depth": 128, 00:08:16.551 "io_size": 4096, 00:08:16.551 
"runtime": 10.004508, 00:08:16.551 "iops": 23810.16637699725, 00:08:16.551 "mibps": 93.00846241014551, 00:08:16.551 "io_failed": 0, 00:08:16.551 "io_timeout": 0, 00:08:16.551 "avg_latency_us": 5372.612029268415, 00:08:16.551 "min_latency_us": 3167.5733333333333, 00:08:16.551 "max_latency_us": 13606.521904761905 00:08:16.551 } 00:08:16.551 ], 00:08:16.551 "core_count": 1 00:08:16.551 } 00:08:16.551 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2432856 00:08:16.551 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2432856 ']' 00:08:16.551 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2432856 00:08:16.551 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:16.551 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.551 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432856 00:08:16.810 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:16.810 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:16.810 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432856' 00:08:16.810 killing process with pid 2432856 00:08:16.810 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2432856 00:08:16.810 Received shutdown signal, test time was about 10.000000 seconds 00:08:16.810 00:08:16.810 Latency(us) 00:08:16.810 [2024-12-09T16:18:45.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.810 [2024-12-09T16:18:45.989Z] =================================================================================================================== 00:08:16.810 [2024-12-09T16:18:45.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:16.810 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2432856 00:08:16.810 17:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.069 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.328 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:17.328 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:17.328 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:17.328 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:17.328 17:18:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.587 [2024-12-09 17:18:46.636863] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.587 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:17.588 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:17.846 request: 00:08:17.847 { 00:08:17.847 "uuid": "cccb41b7-9324-4f4f-bb04-d8b7f4809e12", 00:08:17.847 "method": "bdev_lvol_get_lvstores", 00:08:17.847 "req_id": 1 00:08:17.847 } 00:08:17.847 Got JSON-RPC error response 00:08:17.847 response: 00:08:17.847 { 00:08:17.847 "code": -19, 00:08:17.847 "message": "No such device" 00:08:17.847 } 00:08:17.847 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:17.847 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.847 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:17.847 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.847 17:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.106 aio_bdev 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c4fea60a-97b7-472c-b4e7-caf15b587935 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c4fea60a-97b7-472c-b4e7-caf15b587935 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.106 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c4fea60a-97b7-472c-b4e7-caf15b587935 -t 2000 00:08:18.365 [ 00:08:18.365 { 00:08:18.365 "name": "c4fea60a-97b7-472c-b4e7-caf15b587935", 00:08:18.365 "aliases": [ 00:08:18.365 "lvs/lvol" 00:08:18.365 ], 00:08:18.365 "product_name": "Logical Volume", 00:08:18.365 "block_size": 4096, 00:08:18.365 "num_blocks": 38912, 00:08:18.365 "uuid": "c4fea60a-97b7-472c-b4e7-caf15b587935", 00:08:18.365 "assigned_rate_limits": { 00:08:18.365 "rw_ios_per_sec": 0, 00:08:18.365 "rw_mbytes_per_sec": 0, 00:08:18.365 "r_mbytes_per_sec": 0, 00:08:18.365 "w_mbytes_per_sec": 0 00:08:18.365 }, 00:08:18.365 "claimed": false, 00:08:18.365 "zoned": false, 00:08:18.365 "supported_io_types": { 00:08:18.365 "read": true, 00:08:18.365 "write": true, 00:08:18.365 "unmap": true, 00:08:18.365 "flush": false, 00:08:18.365 "reset": true, 00:08:18.365 "nvme_admin": false, 00:08:18.365 "nvme_io": false, 00:08:18.365 "nvme_io_md": false, 00:08:18.365 "write_zeroes": true, 00:08:18.365 "zcopy": false, 00:08:18.365 "get_zone_info": false, 00:08:18.365 "zone_management": false, 00:08:18.365 "zone_append": false, 00:08:18.365 "compare": false, 00:08:18.365 "compare_and_write": false, 00:08:18.365 "abort": false, 00:08:18.365 "seek_hole": true, 00:08:18.365 "seek_data": true, 00:08:18.365 "copy": false, 00:08:18.365 "nvme_iov_md": false 00:08:18.365 }, 00:08:18.365 "driver_specific": { 00:08:18.365 "lvol": { 00:08:18.365 "lvol_store_uuid": "cccb41b7-9324-4f4f-bb04-d8b7f4809e12", 00:08:18.365 "base_bdev": "aio_bdev", 00:08:18.365 "thin_provision": false, 00:08:18.365 "num_allocated_clusters": 38, 00:08:18.365 "snapshot": false, 00:08:18.365 "clone": false, 00:08:18.365 "esnap_clone": false 00:08:18.365 } 00:08:18.365 } 00:08:18.365 } 00:08:18.365 ] 00:08:18.365 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:18.365 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:18.365 
17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:18.709 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:18.709 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:18.709 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:18.709 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:18.709 17:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4fea60a-97b7-472c-b4e7-caf15b587935 00:08:18.987 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cccb41b7-9324-4f4f-bb04-d8b7f4809e12 00:08:19.245 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:19.245 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.504 00:08:19.504 real 0m15.551s 00:08:19.504 user 0m15.098s 00:08:19.504 sys 0m1.513s 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:19.504 ************************************ 00:08:19.504 END TEST lvs_grow_clean 00:08:19.504 ************************************ 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:19.504 ************************************ 00:08:19.504 START TEST lvs_grow_dirty 00:08:19.504 ************************************ 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.504 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.763 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:19.763 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.763 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb0d87d0-179a-420e-834d-c8743073e610 00:08:19.763 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:19.763 17:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:20.022 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:20.022 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:20.022 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb0d87d0-179a-420e-834d-c8743073e610 lvol 150 00:08:20.281 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:20.281 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.281 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:20.540 [2024-12-09 17:18:49.474122] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:20.540 [2024-12-09 17:18:49.474169] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:20.540 true 00:08:20.540 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:20.540 17:18:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:20.540 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:20.540 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.799 17:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:21.057 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.316 [2024-12-09 17:18:50.236384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2435432 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2435432 /var/tmp/bdevperf.sock 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2435432 ']' 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:21.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.316 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.575 [2024-12-09 17:18:50.506073] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:08:21.575 [2024-12-09 17:18:50.506116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435432 ] 00:08:21.575 [2024-12-09 17:18:50.579402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.575 [2024-12-09 17:18:50.619768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.575 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.575 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:21.575 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:21.834 Nvme0n1 00:08:21.834 17:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:22.093 [ 00:08:22.093 { 00:08:22.093 "name": "Nvme0n1", 00:08:22.093 "aliases": [ 00:08:22.093 "26906cc4-2711-43bd-8939-7ecc09fae22e" 00:08:22.093 ], 00:08:22.093 "product_name": "NVMe disk", 00:08:22.093 "block_size": 4096, 00:08:22.093 "num_blocks": 38912, 00:08:22.093 "uuid": "26906cc4-2711-43bd-8939-7ecc09fae22e", 00:08:22.093 "numa_id": 1, 00:08:22.093 "assigned_rate_limits": { 00:08:22.093 "rw_ios_per_sec": 0, 00:08:22.093 "rw_mbytes_per_sec": 0, 00:08:22.093 "r_mbytes_per_sec": 0, 00:08:22.093 "w_mbytes_per_sec": 0 00:08:22.093 }, 00:08:22.093 "claimed": false, 00:08:22.093 "zoned": false, 00:08:22.093 "supported_io_types": { 00:08:22.093 "read": true, 00:08:22.093 "write": true, 00:08:22.093 "unmap": true, 00:08:22.093 "flush": true, 00:08:22.093 "reset": true, 00:08:22.093 "nvme_admin": true, 00:08:22.093 "nvme_io": true, 00:08:22.093 "nvme_io_md": false, 00:08:22.093 "write_zeroes": true, 00:08:22.093 "zcopy": false, 00:08:22.093 "get_zone_info": false, 00:08:22.093 "zone_management": false, 00:08:22.093 "zone_append": false, 00:08:22.093 "compare": true, 00:08:22.093 "compare_and_write": true, 00:08:22.093 "abort": true, 00:08:22.093 "seek_hole": false, 00:08:22.093 "seek_data": false, 00:08:22.093 "copy": true, 00:08:22.093 "nvme_iov_md": false 00:08:22.093 }, 00:08:22.093 "memory_domains": [ 00:08:22.093 { 00:08:22.093 "dma_device_id": "system", 00:08:22.093 "dma_device_type": 1 00:08:22.093 } 00:08:22.093 ], 00:08:22.093 "driver_specific": { 00:08:22.093 "nvme": [ 00:08:22.093 { 00:08:22.093 "trid": { 00:08:22.093 "trtype": "TCP", 00:08:22.093 "adrfam": "IPv4", 00:08:22.093 "traddr": "10.0.0.2", 00:08:22.093 "trsvcid": "4420", 00:08:22.093 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:22.093 }, 00:08:22.093 "ctrlr_data": { 00:08:22.093 "cntlid": 1, 00:08:22.093 "vendor_id": "0x8086", 00:08:22.093 "model_number": "SPDK bdev Controller", 00:08:22.093 "serial_number": "SPDK0", 00:08:22.093 "firmware_revision": "25.01", 00:08:22.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.093 "oacs": { 00:08:22.093 "security": 0, 00:08:22.093 "format": 0, 00:08:22.093 "firmware": 0, 00:08:22.093 "ns_manage": 0 00:08:22.093 }, 00:08:22.093 "multi_ctrlr": true, 00:08:22.093 
"ana_reporting": false 00:08:22.093 }, 00:08:22.093 "vs": { 00:08:22.093 "nvme_version": "1.3" 00:08:22.093 }, 00:08:22.093 "ns_data": { 00:08:22.093 "id": 1, 00:08:22.093 "can_share": true 00:08:22.093 } 00:08:22.093 } 00:08:22.093 ], 00:08:22.093 "mp_policy": "active_passive" 00:08:22.093 } 00:08:22.093 } 00:08:22.093 ] 00:08:22.093 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2435653 00:08:22.093 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:22.093 17:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.093 Running I/O for 10 seconds... 00:08:23.470 Latency(us) 00:08:23.470 [2024-12-09T16:18:52.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.470 Nvme0n1 : 1.00 23439.00 91.56 0.00 0.00 0.00 0.00 0.00 00:08:23.470 [2024-12-09T16:18:52.649Z] =================================================================================================================== 00:08:23.470 [2024-12-09T16:18:52.649Z] Total : 23439.00 91.56 0.00 0.00 0.00 0.00 0.00 00:08:23.470 00:08:24.037 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:24.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.295 Nvme0n1 : 2.00 23642.00 92.35 0.00 0.00 0.00 0.00 0.00 00:08:24.295 [2024-12-09T16:18:53.474Z] =================================================================================================================== 00:08:24.295 [2024-12-09T16:18:53.475Z] Total : 23642.00 92.35 0.00 0.00 0.00 0.00 0.00 00:08:24.296 00:08:24.296 true 00:08:24.296 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:24.296 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:24.554 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:24.554 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:24.554 17:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2435653 00:08:25.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.121 Nvme0n1 : 3.00 23700.33 92.58 0.00 0.00 0.00 0.00 0.00 00:08:25.121 [2024-12-09T16:18:54.300Z] =================================================================================================================== 00:08:25.121 [2024-12-09T16:18:54.300Z] Total : 23700.33 92.58 0.00 0.00 0.00 0.00 0.00 00:08:25.121 00:08:26.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.498 Nvme0n1 : 4.00 23794.00 92.95 0.00 0.00 0.00 0.00 0.00 00:08:26.498 [2024-12-09T16:18:55.677Z] 
=================================================================================================================== 00:08:26.498 [2024-12-09T16:18:55.677Z] Total : 23794.00 92.95 0.00 0.00 0.00 0.00 0.00 00:08:26.498 00:08:27.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.433 Nvme0n1 : 5.00 23836.40 93.11 0.00 0.00 0.00 0.00 0.00 00:08:27.433 [2024-12-09T16:18:56.612Z] =================================================================================================================== 00:08:27.433 [2024-12-09T16:18:56.612Z] Total : 23836.40 93.11 0.00 0.00 0.00 0.00 0.00 00:08:27.433 00:08:28.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.370 Nvme0n1 : 6.00 23886.50 93.31 0.00 0.00 0.00 0.00 0.00 00:08:28.370 [2024-12-09T16:18:57.549Z] =================================================================================================================== 00:08:28.370 [2024-12-09T16:18:57.549Z] Total : 23886.50 93.31 0.00 0.00 0.00 0.00 0.00 00:08:28.370 00:08:29.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.306 Nvme0n1 : 7.00 23914.14 93.41 0.00 0.00 0.00 0.00 0.00 00:08:29.306 [2024-12-09T16:18:58.485Z] =================================================================================================================== 00:08:29.306 [2024-12-09T16:18:58.485Z] Total : 23914.14 93.41 0.00 0.00 0.00 0.00 0.00 00:08:29.306 00:08:30.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.242 Nvme0n1 : 8.00 23941.88 93.52 0.00 0.00 0.00 0.00 0.00 00:08:30.242 [2024-12-09T16:18:59.421Z] =================================================================================================================== 00:08:30.242 [2024-12-09T16:18:59.421Z] Total : 23941.88 93.52 0.00 0.00 0.00 0.00 0.00 00:08:30.242 00:08:31.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.177 Nvme0n1 : 9.00 23936.56 93.50 0.00 0.00 0.00 0.00 0.00 00:08:31.177 [2024-12-09T16:19:00.356Z] =================================================================================================================== 00:08:31.177 [2024-12-09T16:19:00.356Z] Total : 23936.56 93.50 0.00 0.00 0.00 0.00 0.00 00:08:31.177 00:08:32.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.113 Nvme0n1 : 10.00 23951.20 93.56 0.00 0.00 0.00 0.00 0.00 00:08:32.113 [2024-12-09T16:19:01.292Z] =================================================================================================================== 00:08:32.113 [2024-12-09T16:19:01.292Z] Total : 23951.20 93.56 0.00 0.00 0.00 0.00 0.00 00:08:32.113 00:08:32.113 00:08:32.113 Latency(us) 00:08:32.113 [2024-12-09T16:19:01.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.113 Nvme0n1 : 10.00 23948.51 93.55 0.00 0.00 5341.23 2324.97 10610.59 00:08:32.113 [2024-12-09T16:19:01.292Z] =================================================================================================================== 00:08:32.113 [2024-12-09T16:19:01.292Z] Total : 23948.51 93.55 0.00 0.00 5341.23 2324.97 10610.59 00:08:32.113 { 00:08:32.113 "results": [ 00:08:32.113 { 00:08:32.113 "job": "Nvme0n1", 00:08:32.113 "core_mask": "0x2", 00:08:32.113 "workload": "randwrite", 00:08:32.113 "status": "finished", 00:08:32.113 "queue_depth": 128, 00:08:32.113 "io_size": 4096, 00:08:32.113 
"runtime": 10.003838, 00:08:32.113 "iops": 23948.508562413746, 00:08:32.113 "mibps": 93.5488615719287, 00:08:32.113 "io_failed": 0, 00:08:32.113 "io_timeout": 0, 00:08:32.113 "avg_latency_us": 5341.234827971602, 00:08:32.113 "min_latency_us": 2324.967619047619, 00:08:32.113 "max_latency_us": 10610.590476190477 00:08:32.113 } 00:08:32.113 ], 00:08:32.113 "core_count": 1 00:08:32.113 } 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2435432 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2435432 ']' 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2435432 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2435432 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:32.372 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2435432' 00:08:32.372 killing process with pid 2435432 00:08:32.373 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2435432 00:08:32.373 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.373 00:08:32.373 Latency(us) 00:08:32.373 [2024-12-09T16:19:01.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.373 [2024-12-09T16:19:01.552Z] =================================================================================================================== 00:08:32.373 [2024-12-09T16:19:01.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.373 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2435432 00:08:32.373 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.631 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.890 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:32.890 17:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:33.149 17:19:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2432363 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2432363 00:08:33.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2432363 Killed "${NVMF_APP[@]}" "$@" 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2437450 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2437450 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2437450 ']' 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.149 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.149 [2024-12-09 17:19:02.191390] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:08:33.149 [2024-12-09 17:19:02.191438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.149 [2024-12-09 17:19:02.270082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.149 [2024-12-09 17:19:02.306363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.149 [2024-12-09 17:19:02.306397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.150 [2024-12-09 17:19:02.306404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.150 [2024-12-09 17:19:02.306410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
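At this point the dirty half of the test begins: the first target (pid 2432363) was killed with SIGKILL, so the lvstore on the AIO file was never unloaded cleanly, and a fresh nvmf_tgt (pid 2437450) is starting. Re-creating the AIO bdev in the records below forces blobstore recovery, after which the lvol must resurface with its grown geometry. A condensed sketch of that sequence, using the paths and RPCs from this trace; the old_pid variable and the poll loop are illustrative, not the framework's own helpers:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    kill -9 "$old_pid"                 # no lvstore unload: on-disk metadata stays dirty
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # binary path shortened
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096   # loading the lvstore triggers bs_recover
    "$rpc" bdev_get_bdevs -b 26906cc4-2711-43bd-8939-7ecc09fae22e -t 2000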
00:08:33.150 [2024-12-09 17:19:02.306415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.150 [2024-12-09 17:19:02.306944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.409 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.668 [2024-12-09 17:19:02.624592] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:33.668 [2024-12-09 17:19:02.624725] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:33.668 [2024-12-09 17:19:02.624753] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.668 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:33.927 17:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26906cc4-2711-43bd-8939-7ecc09fae22e -t 2000 00:08:33.927 [ 00:08:33.927 { 00:08:33.927 "name": "26906cc4-2711-43bd-8939-7ecc09fae22e", 00:08:33.927 "aliases": [ 00:08:33.927 "lvs/lvol" 00:08:33.927 ], 00:08:33.927 "product_name": "Logical Volume", 00:08:33.927 "block_size": 4096, 00:08:33.927 "num_blocks": 38912, 00:08:33.927 "uuid": "26906cc4-2711-43bd-8939-7ecc09fae22e", 00:08:33.927 "assigned_rate_limits": { 00:08:33.927 "rw_ios_per_sec": 0, 00:08:33.927 "rw_mbytes_per_sec": 0, 
00:08:33.927 "r_mbytes_per_sec": 0, 00:08:33.927 "w_mbytes_per_sec": 0 00:08:33.927 }, 00:08:33.927 "claimed": false, 00:08:33.927 "zoned": false, 00:08:33.927 "supported_io_types": { 00:08:33.927 "read": true, 00:08:33.927 "write": true, 00:08:33.927 "unmap": true, 00:08:33.927 "flush": false, 00:08:33.927 "reset": true, 00:08:33.927 "nvme_admin": false, 00:08:33.927 "nvme_io": false, 00:08:33.927 "nvme_io_md": false, 00:08:33.927 "write_zeroes": true, 00:08:33.927 "zcopy": false, 00:08:33.927 "get_zone_info": false, 00:08:33.927 "zone_management": false, 00:08:33.927 "zone_append": false, 00:08:33.927 "compare": false, 00:08:33.927 "compare_and_write": false, 00:08:33.927 "abort": false, 00:08:33.927 "seek_hole": true, 00:08:33.927 "seek_data": true, 00:08:33.927 "copy": false, 00:08:33.927 "nvme_iov_md": false 00:08:33.927 }, 00:08:33.927 "driver_specific": { 00:08:33.927 "lvol": { 00:08:33.927 "lvol_store_uuid": "bb0d87d0-179a-420e-834d-c8743073e610", 00:08:33.927 "base_bdev": "aio_bdev", 00:08:33.927 "thin_provision": false, 00:08:33.927 "num_allocated_clusters": 38, 00:08:33.927 "snapshot": false, 00:08:33.927 "clone": false, 00:08:33.927 "esnap_clone": false 00:08:33.927 } 00:08:33.927 } 00:08:33.927 } 00:08:33.927 ] 00:08:33.927 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:33.927 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:33.927 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:34.186 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:34.186 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:34.186 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.445 [2024-12-09 17:19:03.569340] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.445 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:34.704 request: 00:08:34.704 { 00:08:34.704 "uuid": "bb0d87d0-179a-420e-834d-c8743073e610", 00:08:34.704 "method": "bdev_lvol_get_lvstores", 00:08:34.704 "req_id": 1 00:08:34.704 } 00:08:34.704 Got JSON-RPC error response 00:08:34.704 response: 00:08:34.704 { 00:08:34.704 "code": -19, 00:08:34.704 "message": "No such device" 00:08:34.704 } 00:08:34.704 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:34.704 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.704 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.704 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.704 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:34.965 aio_bdev 00:08:34.965 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:34.965 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:34.965 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.965 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:34.965 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.965 17:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.965 17:19:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:35.224 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26906cc4-2711-43bd-8939-7ecc09fae22e -t 2000 00:08:35.224 [ 00:08:35.224 { 00:08:35.224 "name": "26906cc4-2711-43bd-8939-7ecc09fae22e", 00:08:35.224 "aliases": [ 00:08:35.224 "lvs/lvol" 00:08:35.224 ], 00:08:35.224 "product_name": "Logical Volume", 00:08:35.224 "block_size": 4096, 00:08:35.224 "num_blocks": 38912, 00:08:35.224 "uuid": "26906cc4-2711-43bd-8939-7ecc09fae22e", 00:08:35.224 "assigned_rate_limits": { 00:08:35.224 "rw_ios_per_sec": 0, 00:08:35.224 "rw_mbytes_per_sec": 0, 00:08:35.224 "r_mbytes_per_sec": 0, 00:08:35.224 "w_mbytes_per_sec": 0 00:08:35.224 }, 00:08:35.224 "claimed": false, 00:08:35.224 "zoned": false, 00:08:35.224 "supported_io_types": { 00:08:35.224 "read": true, 00:08:35.224 "write": true, 00:08:35.224 "unmap": true, 00:08:35.224 "flush": false, 00:08:35.224 "reset": true, 00:08:35.224 "nvme_admin": false, 00:08:35.224 "nvme_io": false, 00:08:35.224 "nvme_io_md": false, 00:08:35.224 "write_zeroes": true, 00:08:35.224 "zcopy": false, 00:08:35.224 "get_zone_info": false, 00:08:35.224 "zone_management": false, 00:08:35.224 "zone_append": false, 00:08:35.224 "compare": false, 00:08:35.224 "compare_and_write": false, 00:08:35.224 "abort": false, 00:08:35.224 "seek_hole": true, 00:08:35.224 "seek_data": true, 00:08:35.224 "copy": false, 00:08:35.224 "nvme_iov_md": false 00:08:35.224 }, 00:08:35.224 "driver_specific": { 00:08:35.224 "lvol": { 00:08:35.224 "lvol_store_uuid": "bb0d87d0-179a-420e-834d-c8743073e610", 00:08:35.224 "base_bdev": "aio_bdev", 00:08:35.224 "thin_provision": false, 00:08:35.224 "num_allocated_clusters": 38, 00:08:35.224 "snapshot": false, 00:08:35.224 "clone": false, 00:08:35.224 "esnap_clone": false 00:08:35.224 } 00:08:35.224 } 00:08:35.224 } 00:08:35.224 ] 00:08:35.224 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:35.224 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:35.224 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:35.483 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:35.483 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:35.483 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:35.741 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:35.741 17:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 26906cc4-2711-43bd-8939-7ecc09fae22e 00:08:35.741 17:19:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb0d87d0-179a-420e-834d-c8743073e610 00:08:36.000 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.259 00:08:36.259 real 0m16.818s 00:08:36.259 user 0m43.478s 00:08:36.259 sys 0m3.747s 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:36.259 ************************************ 00:08:36.259 END TEST lvs_grow_dirty 00:08:36.259 ************************************ 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:36.259 nvmf_trace.0 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.259 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.259 rmmod nvme_tcp 00:08:36.518 rmmod nvme_fabrics 00:08:36.518 rmmod nvme_keyring 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:36.518 
17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2437450 ']' 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2437450 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2437450 ']' 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2437450 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437450 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437450' 00:08:36.518 killing process with pid 2437450 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2437450 00:08:36.518 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2437450 00:08:36.777 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.777 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.777 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.777 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:36.777 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.778 17:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.684 00:08:38.684 real 0m41.636s 00:08:38.684 user 1m4.154s 00:08:38.684 sys 0m10.232s 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.684 ************************************ 00:08:38.684 END TEST nvmf_lvs_grow 00:08:38.684 ************************************ 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.684 ************************************ 00:08:38.684 START TEST nvmf_bdev_io_wait 00:08:38.684 ************************************ 00:08:38.684 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:38.944 * Looking for test storage... 00:08:38.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.944 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.944 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.944 17:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.944 --rc genhtml_branch_coverage=1 00:08:38.944 --rc genhtml_function_coverage=1 00:08:38.944 --rc genhtml_legend=1 00:08:38.944 --rc geninfo_all_blocks=1 00:08:38.944 --rc geninfo_unexecuted_blocks=1 00:08:38.944 00:08:38.944 ' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.944 --rc genhtml_branch_coverage=1 00:08:38.944 --rc genhtml_function_coverage=1 00:08:38.944 --rc genhtml_legend=1 00:08:38.944 --rc geninfo_all_blocks=1 00:08:38.944 --rc geninfo_unexecuted_blocks=1 00:08:38.944 00:08:38.944 ' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.944 --rc genhtml_branch_coverage=1 00:08:38.944 --rc genhtml_function_coverage=1 00:08:38.944 --rc genhtml_legend=1 00:08:38.944 --rc geninfo_all_blocks=1 00:08:38.944 --rc geninfo_unexecuted_blocks=1 00:08:38.944 00:08:38.944 ' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.944 --rc genhtml_branch_coverage=1 00:08:38.944 --rc genhtml_function_coverage=1 00:08:38.944 --rc genhtml_legend=1 00:08:38.944 --rc geninfo_all_blocks=1 00:08:38.944 --rc geninfo_unexecuted_blocks=1 00:08:38.944 00:08:38.944 ' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.944 17:19:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.944 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.945 17:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:45.517 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:45.517 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.517 17:19:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.517 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:45.518 Found net devices under 0000:af:00.0: cvl_0_0 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:45.518 Found net devices under 0000:af:00.1: cvl_0_1 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:08:45.518 00:08:45.518 --- 10.0.0.2 ping statistics --- 00:08:45.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.518 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:45.518 00:08:45.518 --- 10.0.0.1 ping statistics --- 00:08:45.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.518 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.518 17:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.518 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:45.518 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.518 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.518 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.518 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2441507 00:08:45.518 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2441507 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2441507 ']' 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.519 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.519 [2024-12-09 17:19:14.064590] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:08:45.519 [2024-12-09 17:19:14.064638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.519 [2024-12-09 17:19:14.144561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.519 [2024-12-09 17:19:14.186279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.519 [2024-12-09 17:19:14.186316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.519 [2024-12-09 17:19:14.186323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.519 [2024-12-09 17:19:14.186329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.519 [2024-12-09 17:19:14.186334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.519 [2024-12-09 17:19:14.187803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.519 [2024-12-09 17:19:14.187913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.519 [2024-12-09 17:19:14.188024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.519 [2024-12-09 17:19:14.188024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.778 17:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:46.037 [2024-12-09 17:19:15.013689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.037 Malloc0 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.037 [2024-12-09 17:19:15.068879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2441754 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2441756 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.037 { 00:08:46.037 "params": { 
00:08:46.037 "name": "Nvme$subsystem", 00:08:46.037 "trtype": "$TEST_TRANSPORT", 00:08:46.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.037 "adrfam": "ipv4", 00:08:46.037 "trsvcid": "$NVMF_PORT", 00:08:46.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.037 "hdgst": ${hdgst:-false}, 00:08:46.037 "ddgst": ${ddgst:-false} 00:08:46.037 }, 00:08:46.037 "method": "bdev_nvme_attach_controller" 00:08:46.037 } 00:08:46.037 EOF 00:08:46.037 )") 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2441758 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.037 { 00:08:46.037 "params": { 00:08:46.037 "name": "Nvme$subsystem", 00:08:46.037 "trtype": "$TEST_TRANSPORT", 00:08:46.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.037 "adrfam": "ipv4", 00:08:46.037 "trsvcid": "$NVMF_PORT", 00:08:46.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.037 "hdgst": ${hdgst:-false}, 00:08:46.037 "ddgst": ${ddgst:-false} 00:08:46.037 }, 00:08:46.037 "method": "bdev_nvme_attach_controller" 00:08:46.037 } 00:08:46.037 EOF 00:08:46.037 )") 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2441761 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.037 { 00:08:46.037 "params": { 
00:08:46.037 "name": "Nvme$subsystem", 00:08:46.037 "trtype": "$TEST_TRANSPORT", 00:08:46.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.037 "adrfam": "ipv4", 00:08:46.037 "trsvcid": "$NVMF_PORT", 00:08:46.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.037 "hdgst": ${hdgst:-false}, 00:08:46.037 "ddgst": ${ddgst:-false} 00:08:46.037 }, 00:08:46.037 "method": "bdev_nvme_attach_controller" 00:08:46.037 } 00:08:46.037 EOF 00:08:46.037 )") 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.037 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.037 { 00:08:46.037 "params": { 00:08:46.037 "name": "Nvme$subsystem", 00:08:46.037 "trtype": "$TEST_TRANSPORT", 00:08:46.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.037 "adrfam": "ipv4", 00:08:46.037 "trsvcid": "$NVMF_PORT", 00:08:46.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.038 "hdgst": ${hdgst:-false}, 00:08:46.038 "ddgst": ${ddgst:-false} 00:08:46.038 }, 00:08:46.038 "method": "bdev_nvme_attach_controller" 00:08:46.038 } 00:08:46.038 EOF 00:08:46.038 )") 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2441754 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.038 "params": { 00:08:46.038 "name": "Nvme1", 00:08:46.038 "trtype": "tcp", 00:08:46.038 "traddr": "10.0.0.2", 00:08:46.038 "adrfam": "ipv4", 00:08:46.038 "trsvcid": "4420", 00:08:46.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.038 "hdgst": false, 00:08:46.038 "ddgst": false 00:08:46.038 }, 00:08:46.038 "method": "bdev_nvme_attach_controller" 00:08:46.038 }' 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.038 "params": { 00:08:46.038 "name": "Nvme1", 00:08:46.038 "trtype": "tcp", 00:08:46.038 "traddr": "10.0.0.2", 00:08:46.038 "adrfam": "ipv4", 00:08:46.038 "trsvcid": "4420", 00:08:46.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.038 "hdgst": false, 00:08:46.038 "ddgst": false 00:08:46.038 }, 00:08:46.038 "method": "bdev_nvme_attach_controller" 00:08:46.038 }' 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.038 "params": { 00:08:46.038 "name": "Nvme1", 00:08:46.038 "trtype": "tcp", 00:08:46.038 "traddr": "10.0.0.2", 00:08:46.038 "adrfam": "ipv4", 00:08:46.038 "trsvcid": "4420", 00:08:46.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.038 "hdgst": false, 00:08:46.038 "ddgst": false 00:08:46.038 }, 00:08:46.038 "method": "bdev_nvme_attach_controller" 00:08:46.038 }' 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:46.038 17:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.038 "params": { 00:08:46.038 "name": "Nvme1", 00:08:46.038 "trtype": "tcp", 00:08:46.038 "traddr": "10.0.0.2", 00:08:46.038 "adrfam": "ipv4", 00:08:46.038 "trsvcid": "4420", 00:08:46.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.038 "hdgst": false, 00:08:46.038 "ddgst": false 00:08:46.038 }, 00:08:46.038 "method": "bdev_nvme_attach_controller" 00:08:46.038 }' 00:08:46.038 [2024-12-09 17:19:15.120499] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:08:46.038 [2024-12-09 17:19:15.120550] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:46.038 [2024-12-09 17:19:15.120606] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:08:46.038 [2024-12-09 17:19:15.120643] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:46.038 [2024-12-09 17:19:15.121377] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:08:46.038 [2024-12-09 17:19:15.121413] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:46.038 [2024-12-09 17:19:15.124394] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:08:46.038 [2024-12-09 17:19:15.124443] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:46.296 [2024-12-09 17:19:15.308368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.296 [2024-12-09 17:19:15.353388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.296 [2024-12-09 17:19:15.409225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.296 [2024-12-09 17:19:15.453727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.554 [2024-12-09 17:19:15.523935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.554 [2024-12-09 17:19:15.573429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.554 [2024-12-09 17:19:15.576198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:46.554 [2024-12-09 17:19:15.615292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.554 Running I/O for 1 seconds... 00:08:46.554 Running I/O for 1 seconds... 00:08:46.812 Running I/O for 1 seconds... 00:08:46.812 Running I/O for 1 seconds... 00:08:47.747 13241.00 IOPS, 51.72 MiB/s 00:08:47.747 Latency(us) 00:08:47.747 [2024-12-09T16:19:16.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.747 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:47.747 Nvme1n1 : 1.01 13289.34 51.91 0.00 0.00 9600.46 5024.43 16727.28 00:08:47.747 [2024-12-09T16:19:16.926Z] =================================================================================================================== 00:08:47.747 [2024-12-09T16:19:16.926Z] Total : 13289.34 51.91 0.00 0.00 9600.46 5024.43 16727.28 00:08:47.747 6939.00 IOPS, 27.11 MiB/s 00:08:47.747 Latency(us) 00:08:47.747 [2024-12-09T16:19:16.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.747 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:47.747 Nvme1n1 : 1.02 6960.26 27.19 0.00 0.00 18270.63 6928.09 29834.48 00:08:47.747 [2024-12-09T16:19:16.926Z] =================================================================================================================== 00:08:47.747 [2024-12-09T16:19:16.926Z] Total : 6960.26 27.19 0.00 0.00 18270.63 6928.09 29834.48 00:08:47.747 17:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2441756 00:08:47.747 17:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2441758 00:08:47.747 242272.00 IOPS, 946.38 MiB/s 00:08:47.747 Latency(us) 00:08:47.747 [2024-12-09T16:19:16.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.747 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:47.747 Nvme1n1 : 1.00 241911.09 944.97 0.00 0.00 526.80 221.38 1482.36 00:08:47.747 [2024-12-09T16:19:16.926Z] =================================================================================================================== 00:08:47.747 [2024-12-09T16:19:16.926Z] Total : 241911.09 944.97 0.00 0.00 526.80 221.38 1482.36 00:08:47.747 7856.00 IOPS, 30.69 MiB/s 00:08:47.747 Latency(us) 00:08:47.747 [2024-12-09T16:19:16.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.747 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO 
size: 4096) 00:08:47.748 Nvme1n1 : 1.00 7954.95 31.07 0.00 0.00 16051.53 3198.78 44439.65 00:08:47.748 [2024-12-09T16:19:16.927Z] =================================================================================================================== 00:08:47.748 [2024-12-09T16:19:16.927Z] Total : 7954.95 31.07 0.00 0.00 16051.53 3198.78 44439.65 00:08:48.007 17:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2441761 00:08:48.007 17:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.007 17:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.007 17:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.007 rmmod nvme_tcp 00:08:48.007 rmmod nvme_fabrics 00:08:48.007 rmmod nvme_keyring 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2441507 ']' 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2441507 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2441507 ']' 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2441507 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2441507 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2441507' 00:08:48.007 killing process with pid 2441507 
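[Editor's note] Teardown then runs through the EXIT trap installed earlier: the subsystem is deleted over RPC, host-side NVMe modules are unloaded, nvmf_tgt is killed, and the SPDK-tagged iptables rule plus the target namespace are removed. A condensed sketch of that order, matching the trace below — rpc.py is an assumption here; the script drives the same call through its rpc_cmd wrapper:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop cnode1 over RPC
    modprobe -r nvme-tcp nvme-fabrics                         # the rmmod lines in the trace
    kill "$nvmfpid" && wait "$nvmfpid"                        # killprocess on the nvmf_tgt pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip only the SPDK-tagged rule
    ip netns del cvl_0_0_ns_spdk                              # remove_spdk_ns
    ip -4 addr flush cvl_0_1                                  # flush the initiator-side address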
00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2441507 00:08:48.007 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2441507 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.267 17:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.173 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.173 00:08:50.173 real 0m11.481s 00:08:50.173 user 0m19.259s 00:08:50.173 sys 0m6.179s 00:08:50.173 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.173 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.173 ************************************ 00:08:50.173 END TEST nvmf_bdev_io_wait 00:08:50.173 ************************************ 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.433 ************************************ 00:08:50.433 START TEST nvmf_queue_depth 00:08:50.433 ************************************ 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:50.433 * Looking for test storage... 
00:08:50.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.433 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.434 --rc genhtml_branch_coverage=1 00:08:50.434 --rc genhtml_function_coverage=1 00:08:50.434 --rc genhtml_legend=1 00:08:50.434 --rc geninfo_all_blocks=1 00:08:50.434 --rc geninfo_unexecuted_blocks=1 00:08:50.434 00:08:50.434 ' 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.434 --rc genhtml_branch_coverage=1 00:08:50.434 --rc genhtml_function_coverage=1 00:08:50.434 --rc genhtml_legend=1 00:08:50.434 --rc geninfo_all_blocks=1 00:08:50.434 --rc geninfo_unexecuted_blocks=1 00:08:50.434 00:08:50.434 ' 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.434 --rc genhtml_branch_coverage=1 00:08:50.434 --rc genhtml_function_coverage=1 00:08:50.434 --rc genhtml_legend=1 00:08:50.434 --rc geninfo_all_blocks=1 00:08:50.434 --rc geninfo_unexecuted_blocks=1 00:08:50.434 00:08:50.434 ' 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.434 --rc genhtml_branch_coverage=1 00:08:50.434 --rc genhtml_function_coverage=1 00:08:50.434 --rc genhtml_legend=1 00:08:50.434 --rc geninfo_all_blocks=1 00:08:50.434 --rc geninfo_unexecuted_blocks=1 00:08:50.434 00:08:50.434 ' 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:50.434 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.693 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.693 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.693 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.693 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.693 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.694 17:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:57.267 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:57.267 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:57.267 Found net devices under 0000:af:00.0: cvl_0_0 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.267 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:57.268 Found net devices under 0000:af:00.1: cvl_0_1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:08:57.268 00:08:57.268 --- 10.0.0.2 ping statistics --- 00:08:57.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.268 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:57.268 00:08:57.268 --- 10.0.0.1 ping statistics --- 00:08:57.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.268 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2445526 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2445526 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2445526 ']' 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 [2024-12-09 17:19:25.677919] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
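[annotation] While the target application's startup log continues below, note what nvmf_tcp_init has just finished doing: it split the two ports across a network namespace and verified reachability with the two pings whose statistics appear above. Condensed into the underlying commands (interface names are the ones discovered earlier in this run; the iptables comment string is abbreviated):

  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespaced side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'                              # tagged so teardown can strip it
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The SPDK_NVMF comment on the iptables rule is what lets nvmftestfini later remove exactly this rule and nothing else.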
00:08:57.268 [2024-12-09 17:19:25.677966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.268 [2024-12-09 17:19:25.756474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.268 [2024-12-09 17:19:25.794504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.268 [2024-12-09 17:19:25.794537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.268 [2024-12-09 17:19:25.794544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.268 [2024-12-09 17:19:25.794550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.268 [2024-12-09 17:19:25.794555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.268 [2024-12-09 17:19:25.795100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 [2024-12-09 17:19:25.938166] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 Malloc0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.268 17:19:25 
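[annotation] The DPDK/EAL notices above belong to the storage target itself; nvmfappstart runs it inside the namespace so it listens on the target-side interface. The launch, reduced to its essentials (binary path shortened from the traced @508 line; the backgrounding and pid capture are inferred glue; -m 0x2 is why the reactor reports core 1):

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts RPCs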
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.268 [2024-12-09 17:19:25.988132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2445749 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.268 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2445749 /var/tmp/bdevperf.sock 00:08:57.269 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2445749 ']' 00:08:57.269 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:57.269 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.269 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:57.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:57.269 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.269 17:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.269 [2024-12-09 17:19:26.036564] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
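[annotation] With the listener notice above in place, the provisioning sequence that queue_depth.sh drove through rpc_cmd reads as five plain RPC calls (rpc.py path abbreviated; arguments exactly as traced):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf process whose initialization log follows is started with -z, so it sits idle on its own RPC socket until the test attaches a controller and kicks off I/O out-of-band.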
00:08:57.269 [2024-12-09 17:19:26.036603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445749 ] 00:08:57.269 [2024-12-09 17:19:26.109563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.269 [2024-12-09 17:19:26.150561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.269 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.269 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:57.269 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:57.269 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.269 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.527 NVMe0n1 00:08:57.527 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.527 17:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.527 Running I/O for 10 seconds... 00:08:59.839 12188.00 IOPS, 47.61 MiB/s [2024-12-09T16:19:29.954Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-09T16:19:30.889Z] 12292.67 IOPS, 48.02 MiB/s [2024-12-09T16:19:31.825Z] 12419.75 IOPS, 48.51 MiB/s [2024-12-09T16:19:32.759Z] 12477.40 IOPS, 48.74 MiB/s [2024-12-09T16:19:33.695Z] 12498.33 IOPS, 48.82 MiB/s [2024-12-09T16:19:34.629Z] 12567.29 IOPS, 49.09 MiB/s [2024-12-09T16:19:36.009Z] 12544.50 IOPS, 49.00 MiB/s [2024-12-09T16:19:36.945Z] 12592.44 IOPS, 49.19 MiB/s [2024-12-09T16:19:36.945Z] 12583.10 IOPS, 49.15 MiB/s 00:09:07.766 Latency(us) 00:09:07.766 [2024-12-09T16:19:36.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.766 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:07.766 Verification LBA range: start 0x0 length 0x4000 00:09:07.766 NVMe0n1 : 10.06 12610.63 49.26 0.00 0.00 80956.62 18724.57 53926.77 00:09:07.766 [2024-12-09T16:19:36.945Z] =================================================================================================================== 00:09:07.766 [2024-12-09T16:19:36.945Z] Total : 12610.63 49.26 0.00 0.00 80956.62 18724.57 53926.77 00:09:07.766 { 00:09:07.766 "results": [ 00:09:07.766 { 00:09:07.766 "job": "NVMe0n1", 00:09:07.766 "core_mask": "0x1", 00:09:07.766 "workload": "verify", 00:09:07.766 "status": "finished", 00:09:07.766 "verify_range": { 00:09:07.766 "start": 0, 00:09:07.766 "length": 16384 00:09:07.766 }, 00:09:07.766 "queue_depth": 1024, 00:09:07.766 "io_size": 4096, 00:09:07.766 "runtime": 10.05937, 00:09:07.766 "iops": 12610.630685619477, 00:09:07.766 "mibps": 49.26027611570108, 00:09:07.766 "io_failed": 0, 00:09:07.766 "io_timeout": 0, 00:09:07.766 "avg_latency_us": 80956.61510486476, 00:09:07.766 "min_latency_us": 18724.571428571428, 00:09:07.766 "max_latency_us": 53926.76571428571 00:09:07.766 } 00:09:07.766 ], 00:09:07.766 "core_count": 1 00:09:07.766 } 00:09:07.766 17:19:36 
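[annotation] Two things worth pulling out of the result block above. First, how the run is actually driven once bdevperf is up: the controller is attached over the bdevperf RPC socket and the workload is started out-of-band (both commands appear in the trace; script paths abbreviated):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Second, a consistency check on the numbers: 12610.63 IOPS x 4096 B = 51,653,140 B/s, and 51,653,140 / 1,048,576 ≈ 49.26 MiB/s, which matches the MiB/s column, so the two throughput figures agree at the 4 KiB I/O size used here.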
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2445749 00:09:07.766 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2445749 ']' 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2445749 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2445749 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2445749' 00:09:07.767 killing process with pid 2445749 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2445749 00:09:07.767 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.767 00:09:07.767 Latency(us) 00:09:07.767 [2024-12-09T16:19:36.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.767 [2024-12-09T16:19:36.946Z] =================================================================================================================== 00:09:07.767 [2024-12-09T16:19:36.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2445749 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.767 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.767 rmmod nvme_tcp 00:09:07.767 rmmod nvme_fabrics 00:09:07.767 rmmod nvme_keyring 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2445526 ']' 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2445526 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2445526 ']' 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2445526 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.026 17:19:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2445526 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2445526' 00:09:08.026 killing process with pid 2445526 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2445526 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2445526 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.026 17:19:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.562 00:09:10.562 real 0m19.852s 00:09:10.562 user 0m23.229s 00:09:10.562 sys 0m6.131s 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.562 ************************************ 00:09:10.562 END TEST nvmf_queue_depth 00:09:10.562 ************************************ 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core -- 
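[annotation] That completes nvmf_queue_depth; the teardown interleaved above is the mirror image of the setup and is worth reading as one unit. The namespace-removal helper _remove_spdk_ns is not expanded in the trace, so the ip netns delete line below is an assumed equivalent, not a verbatim copy:

  kill "$nvmfpid"                                        # killprocess: stop the namespaced nvmf_tgt
  modprobe -v -r nvme-tcp                                # the rmmod lines above show this pulling out
  modprobe -v -r nvme-fabrics                            #   nvme_tcp, nvme_fabrics and nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # release the initiator-side address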
common/autotest_common.sh@10 -- # set +x 00:09:10.562 ************************************ 00:09:10.562 START TEST nvmf_target_multipath 00:09:10.562 ************************************ 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:10.562 * Looking for test storage... 00:09:10.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.562 --rc genhtml_branch_coverage=1 00:09:10.562 --rc genhtml_function_coverage=1 00:09:10.562 --rc genhtml_legend=1 00:09:10.562 --rc geninfo_all_blocks=1 00:09:10.562 --rc geninfo_unexecuted_blocks=1 00:09:10.562 00:09:10.562 ' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.562 --rc genhtml_branch_coverage=1 00:09:10.562 --rc genhtml_function_coverage=1 00:09:10.562 --rc genhtml_legend=1 00:09:10.562 --rc geninfo_all_blocks=1 00:09:10.562 --rc geninfo_unexecuted_blocks=1 00:09:10.562 00:09:10.562 ' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.562 --rc genhtml_branch_coverage=1 00:09:10.562 --rc genhtml_function_coverage=1 00:09:10.562 --rc genhtml_legend=1 00:09:10.562 --rc geninfo_all_blocks=1 00:09:10.562 --rc geninfo_unexecuted_blocks=1 00:09:10.562 00:09:10.562 ' 00:09:10.562 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.563 --rc genhtml_branch_coverage=1 00:09:10.563 --rc genhtml_function_coverage=1 00:09:10.563 --rc genhtml_legend=1 00:09:10.563 --rc geninfo_all_blocks=1 00:09:10.563 --rc geninfo_unexecuted_blocks=1 00:09:10.563 00:09:10.563 ' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.563 17:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.134 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:17.135 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:17.135 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:17.135 Found net devices under 0000:af:00.0: cvl_0_0 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.135 17:19:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:17.135 Found net devices under 0000:af:00.1: cvl_0_1 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:09:17.135 00:09:17.135 --- 10.0.0.2 ping statistics --- 00:09:17.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.135 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:09:17.135 00:09:17.135 --- 10.0.0.1 ping statistics --- 00:09:17.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.135 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:17.135 only one NIC for nvmf test 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
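[annotation] Unlike queue_depth, the multipath test bails out almost immediately: the trace above shows an empty test at multipath.sh@45, the 'only one NIC for nvmf test' notice, and then straight into nvmftestfini. The guard amounts to the following sketch; the variable name is hypothetical, since the trace only shows its already-empty expansion:

  if [ -z "$SECOND_NIC_LIST" ]; then    # hypothetical name; the real variable is elided in the trace
    echo 'only one NIC for nvmf test'
    nvmftestfini
    exit 0                              # still recorded below as a passing END TEST
  fi

So on this single-pair E810 host the multipath scenario is skipped rather than failed.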
00:09:17.135 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.136 rmmod nvme_tcp 00:09:17.136 rmmod nvme_fabrics 00:09:17.136 rmmod nvme_keyring 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.136 17:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.515 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.516 00:09:18.516 real 0m8.343s 00:09:18.516 user 0m1.864s 00:09:18.516 sys 0m4.493s 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.516 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:18.516 ************************************ 00:09:18.516 END TEST nvmf_target_multipath 00:09:18.516 ************************************ 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.776 ************************************ 00:09:18.776 START TEST nvmf_zcopy 00:09:18.776 ************************************ 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:18.776 * Looking for test storage... 
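The START TEST / END TEST banners and the real/user/sys summary above come from the run_test wrapper in autotest_common.sh. A hedged reduction of what it does, with the banner text inferred from this log and error handling omitted:

# Hypothetical reduction of run_test; only the banners and the use of
# `time` are visible in this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # e.g. test/nvmf/target/zcopy.sh --transport=tcp
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}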
00:09:18.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:18.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.776 --rc genhtml_branch_coverage=1 00:09:18.776 --rc genhtml_function_coverage=1 00:09:18.776 --rc genhtml_legend=1 00:09:18.776 --rc geninfo_all_blocks=1 00:09:18.776 --rc geninfo_unexecuted_blocks=1 00:09:18.776 00:09:18.776 ' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:18.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.776 --rc genhtml_branch_coverage=1 00:09:18.776 --rc genhtml_function_coverage=1 00:09:18.776 --rc genhtml_legend=1 00:09:18.776 --rc geninfo_all_blocks=1 00:09:18.776 --rc geninfo_unexecuted_blocks=1 00:09:18.776 00:09:18.776 ' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:18.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.776 --rc genhtml_branch_coverage=1 00:09:18.776 --rc genhtml_function_coverage=1 00:09:18.776 --rc genhtml_legend=1 00:09:18.776 --rc geninfo_all_blocks=1 00:09:18.776 --rc geninfo_unexecuted_blocks=1 00:09:18.776 00:09:18.776 ' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:18.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.776 --rc genhtml_branch_coverage=1 00:09:18.776 --rc genhtml_function_coverage=1 00:09:18.776 --rc genhtml_legend=1 00:09:18.776 --rc geninfo_all_blocks=1 00:09:18.776 --rc geninfo_unexecuted_blocks=1 00:09:18.776 00:09:18.776 ' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.776 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.777 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.777 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.777 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.777 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.777 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.777 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.037 17:19:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:25.613 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:25.613 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:25.613 Found net devices under 0000:af:00.0: cvl_0_0 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:25.613 Found net devices under 0000:af:00.1: cvl_0_1 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.613 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:09:25.614 00:09:25.614 --- 10.0.0.2 ping statistics --- 00:09:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.614 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:25.614 00:09:25.614 --- 10.0.0.1 ping statistics --- 00:09:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.614 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2454560 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2454560 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2454560 ']' 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.614 17:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 [2024-12-09 17:19:54.025427] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
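nvmfappstart, traced above, boots the target inside the test namespace and then blocks in waitforlisten until the RPC socket answers. A sketch under stated assumptions: the launch command is taken verbatim from the trace, but waitforlisten's polling body is not shown in the log, and rpc_get_methods is simply one cheap probe such a helper could use.

# Launch the target in the namespace, as traced above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Assumed waitforlisten behavior: poll /var/tmp/spdk.sock until the app
# responds, bailing out if the process dies first.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1
    sleep 0.5
done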
00:09:25.614 [2024-12-09 17:19:54.025476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.614 [2024-12-09 17:19:54.102466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.614 [2024-12-09 17:19:54.141868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.614 [2024-12-09 17:19:54.141903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.614 [2024-12-09 17:19:54.141910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.614 [2024-12-09 17:19:54.141915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.614 [2024-12-09 17:19:54.141921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.614 [2024-12-09 17:19:54.142431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 [2024-12-09 17:19:54.278778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 [2024-12-09 17:19:54.298967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 malloc0 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:25.614 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:25.614 { 00:09:25.614 "params": { 00:09:25.614 "name": "Nvme$subsystem", 00:09:25.614 "trtype": "$TEST_TRANSPORT", 00:09:25.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.614 "adrfam": "ipv4", 00:09:25.614 "trsvcid": "$NVMF_PORT", 00:09:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.614 "hdgst": ${hdgst:-false}, 00:09:25.614 "ddgst": ${ddgst:-false} 00:09:25.614 }, 00:09:25.614 "method": "bdev_nvme_attach_controller" 00:09:25.614 } 00:09:25.614 EOF 00:09:25.615 )") 00:09:25.615 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:25.615 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
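gen_nvmf_target_json, whose cat/jq trace appears around here, expands one attach-controller fragment per subsystem and prints the merged result. The outer envelope is an assumption based on SPDK's JSON-config layout; for this run the resolved document handed to bdevperf on /dev/fd/62 would look roughly as below, with a heredoc on fd 62 standing in for the process substitution seen in the log.

# Approximation of the first bdevperf invocation and its JSON input;
# the "subsystems"/"bdev" envelope is assumed, the params are the ones
# printf'd in the trace.
build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62<<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    } ]
  } ]
}
EOF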
00:09:25.615 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:25.615 17:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:25.615 "params": { 00:09:25.615 "name": "Nvme1", 00:09:25.615 "trtype": "tcp", 00:09:25.615 "traddr": "10.0.0.2", 00:09:25.615 "adrfam": "ipv4", 00:09:25.615 "trsvcid": "4420", 00:09:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:25.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:25.615 "hdgst": false, 00:09:25.615 "ddgst": false 00:09:25.615 }, 00:09:25.615 "method": "bdev_nvme_attach_controller" 00:09:25.615 }' 00:09:25.615 [2024-12-09 17:19:54.380684] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:09:25.615 [2024-12-09 17:19:54.380725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454587 ] 00:09:25.615 [2024-12-09 17:19:54.453837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.615 [2024-12-09 17:19:54.493048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.873 Running I/O for 10 seconds... 00:09:27.744 8739.00 IOPS, 68.27 MiB/s [2024-12-09T16:19:58.301Z] 8831.00 IOPS, 68.99 MiB/s [2024-12-09T16:19:59.235Z] 8843.00 IOPS, 69.09 MiB/s [2024-12-09T16:20:00.173Z] 8853.50 IOPS, 69.17 MiB/s [2024-12-09T16:20:01.111Z] 8860.20 IOPS, 69.22 MiB/s [2024-12-09T16:20:02.047Z] 8853.67 IOPS, 69.17 MiB/s [2024-12-09T16:20:02.985Z] 8863.71 IOPS, 69.25 MiB/s [2024-12-09T16:20:03.922Z] 8871.62 IOPS, 69.31 MiB/s [2024-12-09T16:20:05.299Z] 8871.89 IOPS, 69.31 MiB/s [2024-12-09T16:20:05.299Z] 8876.50 IOPS, 69.35 MiB/s 00:09:36.120 Latency(us) 00:09:36.120 [2024-12-09T16:20:05.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.120 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:36.120 Verification LBA range: start 0x0 length 0x1000 00:09:36.120 Nvme1n1 : 10.01 8880.37 69.38 0.00 0.00 14372.88 1763.23 22719.15 00:09:36.120 [2024-12-09T16:20:05.299Z] =================================================================================================================== 00:09:36.120 [2024-12-09T16:20:05.299Z] Total : 8880.37 69.38 0.00 0.00 14372.88 1763.23 22719.15 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2456395 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.120 { 00:09:36.120 "params": { 00:09:36.120 "name": 
"Nvme$subsystem", 00:09:36.120 "trtype": "$TEST_TRANSPORT", 00:09:36.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.120 "adrfam": "ipv4", 00:09:36.120 "trsvcid": "$NVMF_PORT", 00:09:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.120 "hdgst": ${hdgst:-false}, 00:09:36.120 "ddgst": ${ddgst:-false} 00:09:36.120 }, 00:09:36.120 "method": "bdev_nvme_attach_controller" 00:09:36.120 } 00:09:36.120 EOF 00:09:36.120 )") 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:36.120 [2024-12-09 17:20:05.054756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.120 [2024-12-09 17:20:05.054786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:36.120 17:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.120 "params": { 00:09:36.120 "name": "Nvme1", 00:09:36.120 "trtype": "tcp", 00:09:36.120 "traddr": "10.0.0.2", 00:09:36.120 "adrfam": "ipv4", 00:09:36.120 "trsvcid": "4420", 00:09:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.120 "hdgst": false, 00:09:36.120 "ddgst": false 00:09:36.120 }, 00:09:36.120 "method": "bdev_nvme_attach_controller" 00:09:36.120 }' 00:09:36.120 [2024-12-09 17:20:05.066755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.120 [2024-12-09 17:20:05.066768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.120 [2024-12-09 17:20:05.078786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.120 [2024-12-09 17:20:05.078796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.120 [2024-12-09 17:20:05.090813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.120 [2024-12-09 17:20:05.090822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.120 [2024-12-09 17:20:05.094092] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:09:36.120 [2024-12-09 17:20:05.094131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456395 ] 00:09:36.120 [2024-12-09 17:20:05.102844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.120 [2024-12-09 17:20:05.102859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.120 [2024-12-09 17:20:05.114877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.120 [2024-12-09 17:20:05.114886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.126909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.126918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.138941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.138950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.150972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.150982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.163005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.163014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.169297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.121 [2024-12-09 17:20:05.175036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.175047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.187071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.187084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.199102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.199112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.209094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.121 [2024-12-09 17:20:05.211136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.211148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.223181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.223198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.235206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.235230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.247239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:36.121 [2024-12-09 17:20:05.247253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.259268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.259281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.271303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.271316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.283338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.283350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.121 [2024-12-09 17:20:05.295363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.121 [2024-12-09 17:20:05.295372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.307408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.307428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.319435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.319452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.331467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.331481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.343498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.343509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.355524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.355533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.367559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.367569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.379596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.379609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.391624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.391633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.403655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.403664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.415682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.415691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 
17:20:05.427721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.427734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.439751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.439759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.451782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.451791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.463820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.463830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.475849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.475857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.487892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.487908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 Running I/O for 5 seconds... 00:09:36.457 [2024-12-09 17:20:05.504310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.504330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.517594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.517613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.531058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.531076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.544938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.544956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.457 [2024-12-09 17:20:05.559252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.457 [2024-12-09 17:20:05.559272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.743 [2024-12-09 17:20:05.572842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.743 [2024-12-09 17:20:05.572860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.743 [2024-12-09 17:20:05.586757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.743 [2024-12-09 17:20:05.586776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.743 [2024-12-09 17:20:05.600789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.743 [2024-12-09 17:20:05.600807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.743 [2024-12-09 17:20:05.614418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:36.743 [2024-12-09 17:20:05.614436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.743 [2024-12-09 17:20:05.628253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:36.743 [2024-12-09 17:20:05.628271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.743 [... the identical "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats every ~9-15 ms through 2024-12-09 17:20:09.455 as the test keeps retrying the duplicate add-namespace request; several hundred duplicate records elided, periodic throughput samples retained below ...]
00:09:37.335 17031.00 IOPS, 133.05 MiB/s [2024-12-09T16:20:06.514Z]
00:09:38.371 17117.00 IOPS, 133.73 MiB/s [2024-12-09T16:20:07.549Z]
00:09:39.407 17151.00 IOPS, 133.99 MiB/s [2024-12-09T16:20:08.586Z]
00:09:40.442 [2024-12-09 17:20:09.455846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.469513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.469531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.483187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.483205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.496428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.496445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 17159.50 IOPS, 134.06 MiB/s [2024-12-09T16:20:09.621Z] [2024-12-09 17:20:09.510180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.510201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.523990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.524008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.537997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.538015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.547003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.547028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.561266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.561285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.575537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.575555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.589302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.589321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.598106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.598123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.442 [2024-12-09 17:20:09.607185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.442 [2024-12-09 17:20:09.607202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.621472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.621491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.635082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.635100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 
17:20:09.648502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.648520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.662426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.662444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.675852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.675871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.689554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.689573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.703121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.703139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.716678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.716697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.730189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.730207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.744009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.744028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.757640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.757658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.771391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.771410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.785058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.785077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.799007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.799029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.812732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.812750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.826936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.826954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.840709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.840728] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.854057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.854075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.701 [2024-12-09 17:20:09.867872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.701 [2024-12-09 17:20:09.867891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.881788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.881807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.895062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.895081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.909159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.909187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.922691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.922709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.936025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.936044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.949178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.949197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.962767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.962785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.976690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.976707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:09.990253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:09.990271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.004424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.004442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.018324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.018344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.032641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.032664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.044979] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.045000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.059461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.059481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.072254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.072274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.086387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.086407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.100730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.100750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.114362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.114382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.960 [2024-12-09 17:20:10.128273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.960 [2024-12-09 17:20:10.128292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.141978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.141998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.155807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.155827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.169241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.169261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.182992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.183011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.196693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.196712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.210758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.210777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.224225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.224244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.238551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.238570] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.253886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.253906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.267743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.267763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.281345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.281364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.290252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.290272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.304544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.304563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.318233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.318251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.331970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.331989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.340701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.340722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.355746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.355765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.371741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.371760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.382611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.382629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.219 [2024-12-09 17:20:10.396571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.219 [2024-12-09 17:20:10.396590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.410231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.410251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.423698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.423715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.437130] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.437148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.450897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.450915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.465023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.465042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.478638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.478656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.492257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.492275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.502096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.502115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 17135.00 IOPS, 133.87 MiB/s 00:09:41.478 Latency(us) 00:09:41.478 [2024-12-09T16:20:10.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.478 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:41.478 Nvme1n1 : 5.00 17145.06 133.95 0.00 0.00 7460.06 2668.25 16352.79 00:09:41.478 [2024-12-09T16:20:10.657Z] =================================================================================================================== 00:09:41.478 [2024-12-09T16:20:10.657Z] Total : 17145.06 133.95 0.00 0.00 7460.06 2668.25 16352.79 00:09:41.478 [2024-12-09 17:20:10.512439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.512459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.524478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.524493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.536513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.536533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.548532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.548551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.560562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.560578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.572591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.478 [2024-12-09 17:20:10.572605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.478 [2024-12-09 17:20:10.584622] 
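The error storm above is deliberate: while the background I/O job is still running, zcopy.sh keeps asking the target to add a namespace under an NSID that is already claimed, and every attempt has to be rejected cleanly without disturbing the traffic. A minimal sketch of that pattern (a hypothetical loop, not the literal test code; rpc_cmd is the harness wrapper around scripts/rpc.py, and malloc0 is the bdev name seen later in this trace):

    # each call must fail with "Requested NSID 1 already in use";
    # a successful add would mean the target lost track of NSID 1
    for _ in $(seq 1 50); do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 && exit 1
    done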
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2456395) - No such process
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2456395
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:41.737 delay0
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:41.737 17:20:10
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.737 17:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:41.737 [2024-12-09 17:20:10.824139] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:48.301 Initializing NVMe Controllers 00:09:48.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:48.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:48.301 Initialization complete. Launching workers. 00:09:48.301 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 934 00:09:48.301 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1214, failed to submit 40 00:09:48.301 success 1038, unsuccessful 176, failed 0 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.301 rmmod nvme_tcp 00:09:48.301 rmmod nvme_fabrics 00:09:48.301 rmmod nvme_keyring 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2454560 ']' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2454560 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2454560 ']' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2454560 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2454560 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2454560' 00:09:48.301 killing process with pid 2454560 00:09:48.301 17:20:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2454560 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2454560 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.301 17:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.838 00:09:50.838 real 0m31.713s 00:09:50.838 user 0m42.464s 00:09:50.838 sys 0m11.160s 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 ************************************ 00:09:50.838 END TEST nvmf_zcopy 00:09:50.838 ************************************ 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.838 17:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 ************************************ 00:09:50.838 START TEST nvmf_nmic 00:09:50.839 ************************************ 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.839 * Looking for test storage... 
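The tail of the zcopy run above doubles as a self-contained recipe: namespace 1 is rebuilt on top of a delay bdev so that queued I/O stays outstanding long enough to be aborted, and the abort example then drives and cancels that I/O. A hedged sketch of the equivalent standalone commands (names, flags, and the transport ID are taken from the trace; assumes the target is already listening on 10.0.0.2:4420 and the working directory is the spdk checkout):

    # swap ns 1 onto an artificially slow bdev (all four latency arguments are in microseconds)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # keep 64 random 50/50 read/write I/Os queued for 5 seconds on core 0 and abort them
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'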
00:09:50.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.839 --rc genhtml_branch_coverage=1 00:09:50.839 --rc genhtml_function_coverage=1 00:09:50.839 --rc genhtml_legend=1 00:09:50.839 --rc geninfo_all_blocks=1 00:09:50.839 --rc geninfo_unexecuted_blocks=1 00:09:50.839 00:09:50.839 ' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.839 --rc genhtml_branch_coverage=1 00:09:50.839 --rc genhtml_function_coverage=1 00:09:50.839 --rc genhtml_legend=1 00:09:50.839 --rc geninfo_all_blocks=1 00:09:50.839 --rc geninfo_unexecuted_blocks=1 00:09:50.839 00:09:50.839 ' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.839 --rc genhtml_branch_coverage=1 00:09:50.839 --rc genhtml_function_coverage=1 00:09:50.839 --rc genhtml_legend=1 00:09:50.839 --rc geninfo_all_blocks=1 00:09:50.839 --rc geninfo_unexecuted_blocks=1 00:09:50.839 00:09:50.839 ' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.839 --rc genhtml_branch_coverage=1 00:09:50.839 --rc genhtml_function_coverage=1 00:09:50.839 --rc genhtml_legend=1 00:09:50.839 --rc geninfo_all_blocks=1 00:09:50.839 --rc geninfo_unexecuted_blocks=1 00:09:50.839 00:09:50.839 ' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
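The scripts/common.sh xtrace above is the harness choosing lcov flags: cmp_versions splits the two version strings on dots and compares them field by field, so lt 1.15 2 is true and the legacy --rc options get used. A rough standalone equivalent (a sketch only; the harness walks the fields itself rather than relying on sort -V):

    lt() {
        # true when $1 sorts strictly before $2 in version order
        [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'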
00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:50.839 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:50.840 
17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.840 17:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.413 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.413 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.413 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.413 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.413 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.413 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:57.414 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:57.414 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.414 17:20:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:57.414 Found net devices under 0000:af:00.0: cvl_0_0 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:57.414 Found net devices under 0000:af:00.1: cvl_0_1 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
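#
# The trace above maps each allow-listed PCI function to its kernel netdev by
# globbing sysfs, keeps only interfaces that are up, and strips the path down
# to the interface name. A minimal sketch of the same lookup; the operstate
# check is an assumption about how the harness decides "up":
#
pci=0000:af:00.0                                   # one of the functions found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
for dev in "${pci_net_devs[@]}"; do
    [[ $(cat "/sys/class/net/$dev/operstate") == up ]] && echo "usable: $dev"
done
#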
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:09:57.414 00:09:57.414 --- 10.0.0.2 ping statistics --- 00:09:57.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.414 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
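#
# The block above builds the test topology: one physical port (cvl_0_0, the
# target side) is pinned inside a fresh network namespace while its sibling
# port (cvl_0_1, the initiator side) stays in the root namespace, so NVMe/TCP
# traffic between 10.0.0.1 and 10.0.0.2 crosses the actual link. Condensed
# replay of the commands in the trace:
#
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the comment tag is what cleanup greps for later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Both ping probes above must succeed before the test proceeds.
#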
00:09:57.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:09:57.414 00:09:57.414 --- 10.0.0.1 ping statistics --- 00:09:57.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.414 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.414 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2461915 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2461915 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2461915 ']' 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 [2024-12-09 17:20:25.730526] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
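#
# nvmfappstart launches the target inside the namespace so its listeners bind
# there. Per the trace: -i 0 sets the shared-memory instance id, -e 0xFFFF
# enables every tracepoint group, and -m 0xF is the core mask (four reactors,
# matching the "Reactor started on core 0..3" notices). Sketch; the
# socket-polling loop is a crude stand-in for the real waitforlisten helper:
#
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket
#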
00:09:57.415 [2024-12-09 17:20:25.730574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.415 [2024-12-09 17:20:25.808827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.415 [2024-12-09 17:20:25.852394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.415 [2024-12-09 17:20:25.852428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.415 [2024-12-09 17:20:25.852435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.415 [2024-12-09 17:20:25.852441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.415 [2024-12-09 17:20:25.852446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.415 [2024-12-09 17:20:25.853926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.415 [2024-12-09 17:20:25.854036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.415 [2024-12-09 17:20:25.854144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.415 [2024-12-09 17:20:25.854145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 [2024-12-09 17:20:25.991746] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 Malloc0 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic 
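#
# Provisioning happens over JSON-RPC once the app is up: create the TCP
# transport, back it with a 64 MiB / 512-byte-block malloc bdev, wrap that in
# a subsystem, and publish a listener. The same calls the rpc_cmd wrapper
# issues above, with the repo path shortened to $rpc and the transport
# options copied verbatim from the trace:
#
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
#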
-- common/autotest_common.sh@10 -- # set +x 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 [2024-12-09 17:20:26.051959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:57.415 test case1: single bdev can't be used in multiple subsystems 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 [2024-12-09 17:20:26.083884] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:57.415 [2024-12-09 17:20:26.083902] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:57.415 [2024-12-09 17:20:26.083909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.415 request: 00:09:57.415 { 00:09:57.415 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:57.415 "namespace": { 00:09:57.415 "bdev_name": "Malloc0", 00:09:57.415 "no_auto_visible": false, 
00:09:57.415 "hide_metadata": false 00:09:57.415 }, 00:09:57.415 "method": "nvmf_subsystem_add_ns", 00:09:57.415 "req_id": 1 00:09:57.415 } 00:09:57.415 Got JSON-RPC error response 00:09:57.415 response: 00:09:57.415 { 00:09:57.415 "code": -32602, 00:09:57.415 "message": "Invalid parameters" 00:09:57.415 } 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:57.415 Adding namespace failed - expected result. 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:57.415 test case2: host connect to nvmf target in multiple paths 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 [2024-12-09 17:20:26.096024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.415 17:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.352 17:20:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:59.289 17:20:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.289 17:20:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:59.289 17:20:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.289 17:20:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:59.289 17:20:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:01.822 17:20:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:01.822 17:20:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:01.822 17:20:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.822 17:20:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:01.822 17:20:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.822 17:20:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:01.822 17:20:30 
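#
# Test case 1 above hinges on bdev claiming: adding a namespace opens the bdev
# with an exclusive-write claim, so a second subsystem adding the same Malloc0
# is refused (bdev_open error=-1, surfaced as JSON-RPC -32602 "Invalid
# parameters"), and that failure is the expected result. Test case 2 then
# checks multipath: a second listener port on the same subsystem, one connect
# per path. Condensed replay:
#
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
      --hostid=801347e8-3fd0-e911-906e-0017a4403562)
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# waitforserial: poll until a block device with the subsystem serial appears.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2
done
#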
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.822 [global] 00:10:01.822 thread=1 00:10:01.822 invalidate=1 00:10:01.822 rw=write 00:10:01.822 time_based=1 00:10:01.822 runtime=1 00:10:01.822 ioengine=libaio 00:10:01.822 direct=1 00:10:01.822 bs=4096 00:10:01.822 iodepth=1 00:10:01.822 norandommap=0 00:10:01.822 numjobs=1 00:10:01.822 00:10:01.822 verify_dump=1 00:10:01.822 verify_backlog=512 00:10:01.822 verify_state_save=0 00:10:01.822 do_verify=1 00:10:01.822 verify=crc32c-intel 00:10:01.822 [job0] 00:10:01.822 filename=/dev/nvme0n1 00:10:01.822 Could not set queue depth (nvme0n1) 00:10:01.822 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.822 fio-3.35 00:10:01.822 Starting 1 thread 00:10:02.757 00:10:02.757 job0: (groupid=0, jobs=1): err= 0: pid=2462786: Mon Dec 9 17:20:31 2024 00:10:02.757 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec) 00:10:02.757 slat (nsec): min=6739, max=42655, avg=7884.30, stdev=1525.97 00:10:02.757 clat (usec): min=178, max=287, avg=221.49, stdev=18.14 00:10:02.757 lat (usec): min=186, max=294, avg=229.37, stdev=18.30 00:10:02.757 clat percentiles (usec): 00:10:02.757 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:10:02.757 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:10:02.757 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 255], 00:10:02.757 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:10:02.757 | 99.99th=[ 289] 00:10:02.757 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:02.757 slat (nsec): min=9663, max=39842, avg=10799.85, stdev=1613.32 00:10:02.757 clat (usec): min=113, max=350, avg=148.69, stdev=21.06 00:10:02.757 lat (usec): min=123, max=390, avg=159.49, stdev=21.50 00:10:02.757 clat percentiles (usec): 00:10:02.757 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 126], 00:10:02.757 | 30.00th=[ 133], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:10:02.757 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 174], 00:10:02.757 | 99.00th=[ 239], 99.50th=[ 241], 99.90th=[ 249], 99.95th=[ 343], 00:10:02.757 | 99.99th=[ 351] 00:10:02.757 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:02.757 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:02.757 lat (usec) : 250=95.35%, 500=4.65% 00:10:02.757 cpu : usr=4.60%, sys=7.20%, ctx=5075, majf=0, minf=1 00:10:02.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.757 issued rwts: total=2515,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.757 00:10:02.757 Run status group 0 (all jobs): 00:10:02.757 READ: bw=9.81MiB/s (10.3MB/s), 9.81MiB/s-9.81MiB/s (10.3MB/s-10.3MB/s), io=9.82MiB (10.3MB), run=1001-1001msec 00:10:02.757 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:02.757 00:10:02.757 Disk stats (read/write): 00:10:02.757 nvme0n1: ios=2172/2560, merge=0/0, ticks=452/356, in_queue=808, util=91.38% 00:10:02.757 17:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 
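#
# The fio-wrapper job above drives the attached /dev/nvme0n1 with one libaio
# writer (4 KiB blocks, queue depth 1, time_based 1 s run) and read-back
# verification via crc32c. An equivalent standalone invocation, assuming the
# same device node:
#
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
    --invalidate=1 --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0
# "Could not set queue depth (nvme0n1)" is generally harmless: fio failed to
# adjust the device's queue setting and simply proceeds with iodepth=1.
#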
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.016 rmmod nvme_tcp 00:10:03.016 rmmod nvme_fabrics 00:10:03.016 rmmod nvme_keyring 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2461915 ']' 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2461915 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2461915 ']' 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2461915 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2461915 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2461915' 00:10:03.016 killing process with pid 2461915 00:10:03.016 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2461915 00:10:03.017 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
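#
# Teardown mirrors setup: drop both paths with one disconnect (hence the
# "disconnected 2 controller(s)" line), confirm the serial vanished from
# lsblk, then stop the target and unload the host modules; the rmmod lines
# above are the verbose output of modprobe -r. Sketch, assuming the pid
# captured at startup:
#
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # killprocess equivalent
modprobe -v -r nvme-tcp                          # also drops nvme-fabrics/keyring
#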
common/autotest_common.sh@978 -- # wait 2461915 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.276 17:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.810 00:10:05.810 real 0m14.910s 00:10:05.810 user 0m33.180s 00:10:05.810 sys 0m5.321s 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.810 ************************************ 00:10:05.810 END TEST nvmf_nmic 00:10:05.810 ************************************ 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.810 ************************************ 00:10:05.810 START TEST nvmf_fio_target 00:10:05.810 ************************************ 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:05.810 * Looking for test storage... 
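#
# Cleanup cashes in the SPDK_NVMF comment tag from setup: dump the ruleset,
# filter out every tagged rule, restore, with no rule numbers to track.
# Namespace disposal follows; the netns delete is an assumption about what
# _remove_spdk_ns does behind the xtrace-silencing wrapper:
#
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
#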
00:10:05.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.810 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.811 --rc genhtml_branch_coverage=1 00:10:05.811 --rc genhtml_function_coverage=1 00:10:05.811 --rc genhtml_legend=1 00:10:05.811 --rc geninfo_all_blocks=1 00:10:05.811 --rc geninfo_unexecuted_blocks=1 00:10:05.811 00:10:05.811 ' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.811 --rc genhtml_branch_coverage=1 00:10:05.811 --rc genhtml_function_coverage=1 00:10:05.811 --rc genhtml_legend=1 00:10:05.811 --rc geninfo_all_blocks=1 00:10:05.811 --rc geninfo_unexecuted_blocks=1 00:10:05.811 00:10:05.811 ' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.811 --rc genhtml_branch_coverage=1 00:10:05.811 --rc genhtml_function_coverage=1 00:10:05.811 --rc genhtml_legend=1 00:10:05.811 --rc geninfo_all_blocks=1 00:10:05.811 --rc geninfo_unexecuted_blocks=1 00:10:05.811 00:10:05.811 ' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.811 --rc genhtml_branch_coverage=1 00:10:05.811 --rc genhtml_function_coverage=1 00:10:05.811 --rc genhtml_legend=1 00:10:05.811 --rc geninfo_all_blocks=1 00:10:05.811 --rc geninfo_unexecuted_blocks=1 00:10:05.811 00:10:05.811 ' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
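#
# The "lt 1.15 2" walk above is scripts/common.sh comparing dotted versions
# field by field to decide whether the installed lcov is older than 2.x. A
# self-contained sketch of the same comparison:
#
ver_lt() {    # ver_lt A B -> success when version A sorts before version B
    local -a v1 v2
    local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1    # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov predates 2.x: keep branch-coverage flags"
#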
uname -s 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
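#
# Each `source paths/export.sh` prepends the same toolchain directories again,
# which is why the PATH lines above and below repeat /opt/go, /opt/protoc and
# /opt/golangci many times over. Harmless, but a guard like this avoids it:
#
path_prepend() {    # prepend a directory only if PATH does not already hold it
    case ":$PATH:" in
        *":$1:"*) ;;                # already present, skip
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH
#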
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.811 17:20:34 
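#
# The "[: : integer expression expected" complaint above is a real, if benign,
# bug at nvmf/common.sh line 33: a `[ "$flag" -eq 1 ]` test runs while the
# variable is empty, and test(1) cannot compare an empty string numerically,
# so the branch simply falls through. Either conventional fix would silence
# it (SOME_TEST_FLAG is a placeholder name; the trace elides which variable
# line 33 actually tests):
#
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo "flag set"   # default empty to 0
[[ ${SOME_TEST_FLAG:-} == 1 ]] && echo "flag set"     # string compare never errors
#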
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.811 17:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.380 17:20:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:12.380 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:12.380 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.380 17:20:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:12.380 Found net devices under 0000:af:00.0: cvl_0_0 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:12.380 Found net devices under 0000:af:00.1: cvl_0_1 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.380 17:20:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.380 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:10:12.381 00:10:12.381 --- 10.0.0.2 ping statistics --- 00:10:12.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.381 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:12.381 00:10:12.381 --- 10.0.0.1 ping statistics --- 00:10:12.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.381 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2466534 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2466534 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2466534 ']' 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.381 17:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.381 [2024-12-09 17:20:40.743965] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:10:12.381 [2024-12-09 17:20:40.744013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.381 [2024-12-09 17:20:40.826458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.381 [2024-12-09 17:20:40.869684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.381 [2024-12-09 17:20:40.869719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.381 [2024-12-09 17:20:40.869727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.381 [2024-12-09 17:20:40.869733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.381 [2024-12-09 17:20:40.869739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.381 [2024-12-09 17:20:40.871323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.381 [2024-12-09 17:20:40.871431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.381 [2024-12-09 17:20:40.871539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.381 [2024-12-09 17:20:40.871540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:12.640 [2024-12-09 17:20:41.779359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.640 17:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.899 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:12.899 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.158 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:13.158 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.416 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:13.416 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.675 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:13.675 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:13.934 17:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.934 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:13.934 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.193 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:14.193 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.452 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:14.452 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:14.711 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.970 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:14.970 17:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.970 17:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:14.970 17:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.228 17:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.487 [2024-12-09 17:20:44.496618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.487 17:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:15.746 17:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:15.746 17:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.123 17:20:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:17.123 17:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:17.123 17:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.123 17:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:17.123 17:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:17.123 17:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:19.028 17:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:19.028 [global] 00:10:19.028 thread=1 00:10:19.028 invalidate=1 00:10:19.028 rw=write 00:10:19.028 time_based=1 00:10:19.028 runtime=1 00:10:19.028 ioengine=libaio 00:10:19.028 direct=1 00:10:19.028 bs=4096 00:10:19.028 iodepth=1 00:10:19.028 norandommap=0 00:10:19.028 numjobs=1 00:10:19.028 00:10:19.028 verify_dump=1 00:10:19.028 verify_backlog=512 00:10:19.028 verify_state_save=0 00:10:19.028 do_verify=1 00:10:19.028 verify=crc32c-intel 00:10:19.028 [job0] 00:10:19.028 filename=/dev/nvme0n1 00:10:19.028 [job1] 00:10:19.028 filename=/dev/nvme0n2 00:10:19.028 [job2] 00:10:19.028 filename=/dev/nvme0n3 00:10:19.028 [job3] 00:10:19.028 filename=/dev/nvme0n4 00:10:19.028 Could not set queue depth (nvme0n1) 00:10:19.028 Could not set queue depth (nvme0n2) 00:10:19.028 Could not set queue depth (nvme0n3) 00:10:19.028 Could not set queue depth (nvme0n4) 00:10:19.287 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.287 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.287 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.287 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.287 fio-3.35 00:10:19.287 Starting 4 threads 00:10:20.666 00:10:20.666 job0: (groupid=0, jobs=1): err= 0: pid=2468093: Mon Dec 9 17:20:49 2024 00:10:20.666 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:10:20.666 slat (nsec): min=20707, max=25116, avg=22167.91, stdev=1086.32 00:10:20.666 clat (usec): min=40868, max=41289, avg=40984.45, stdev=91.71 00:10:20.666 lat (usec): min=40890, max=41311, avg=41006.61, stdev=91.79 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:10:20.666 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.666 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.666 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.666 | 99.99th=[41157] 00:10:20.666 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:20.666 slat (nsec): min=11799, max=41802, avg=13362.22, stdev=2121.97 00:10:20.666 clat (usec): min=128, max=312, avg=208.19, stdev=33.45 00:10:20.666 lat (usec): min=141, max=324, avg=221.55, stdev=33.73 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:10:20.666 | 30.00th=[ 188], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:10:20.666 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 251], 00:10:20.666 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 314], 99.95th=[ 314], 00:10:20.666 | 99.99th=[ 314] 00:10:20.666 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.666 lat (usec) : 250=89.14%, 500=6.74% 00:10:20.666 lat (msec) : 50=4.12% 00:10:20.666 cpu : usr=0.59%, sys=0.88%, ctx=534, majf=0, minf=1 00:10:20.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.666 job1: (groupid=0, jobs=1): err= 0: pid=2468094: Mon Dec 9 17:20:49 2024 00:10:20.666 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:10:20.666 slat (nsec): min=10075, max=24456, avg=23208.43, stdev=2932.62 00:10:20.666 clat (usec): min=40846, max=41133, avg=40969.62, stdev=70.08 00:10:20.666 lat (usec): min=40871, max=41143, avg=40992.83, stdev=68.75 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:20.666 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.666 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.666 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.666 | 99.99th=[41157] 00:10:20.666 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:20.666 slat (nsec): min=10351, max=42923, avg=11410.80, stdev=1862.08 00:10:20.666 clat (usec): min=123, max=3316, avg=153.92, stdev=140.48 00:10:20.666 lat (usec): min=134, max=3327, avg=165.33, stdev=140.52 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:10:20.666 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:10:20.666 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:10:20.666 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 3326], 99.95th=[ 3326], 00:10:20.666 | 99.99th=[ 3326] 00:10:20.666 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.666 lat (usec) : 250=95.33%, 500=0.19% 00:10:20.666 lat (msec) : 4=0.19%, 50=4.30% 00:10:20.666 cpu : usr=0.29%, sys=0.49%, ctx=538, majf=0, minf=1 00:10:20.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.666 job2: (groupid=0, jobs=1): err= 0: pid=2468095: Mon Dec 9 17:20:49 2024 00:10:20.666 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:10:20.666 slat (nsec): min=11277, max=29466, avg=23747.09, stdev=3114.95 00:10:20.666 clat (usec): min=40900, max=41187, avg=40976.42, stdev=59.05 00:10:20.666 lat (usec): min=40929, max=41198, avg=41000.17, stdev=56.39 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:20.666 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.666 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.666 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.666 | 99.99th=[41157] 00:10:20.666 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:20.666 slat (nsec): min=11568, max=39119, avg=13281.85, stdev=2074.71 00:10:20.666 clat (usec): min=137, max=341, avg=208.51, stdev=33.58 00:10:20.666 lat (usec): min=149, max=380, avg=221.79, stdev=34.29 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 169], 00:10:20.666 | 30.00th=[ 186], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 225], 00:10:20.666 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 253], 00:10:20.666 | 99.00th=[ 265], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 343], 00:10:20.666 | 99.99th=[ 343] 00:10:20.666 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.666 lat (usec) : 250=89.70%, 500=6.18% 00:10:20.666 lat (msec) : 50=4.12% 00:10:20.666 cpu : usr=0.69%, sys=0.79%, ctx=534, majf=0, minf=1 00:10:20.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.666 job3: (groupid=0, jobs=1): err= 0: pid=2468096: Mon Dec 9 17:20:49 2024 00:10:20.666 read: IOPS=2555, BW=9.98MiB/s (10.5MB/s)(9.99MiB/1001msec) 00:10:20.666 slat (nsec): min=7503, max=45473, avg=8630.52, stdev=1366.62 00:10:20.666 clat (usec): min=167, max=1412, avg=217.75, stdev=35.83 00:10:20.666 lat (usec): min=174, max=1421, avg=226.38, stdev=35.84 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:10:20.666 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:10:20.666 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:10:20.666 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 881], 00:10:20.666 | 99.99th=[ 1418] 00:10:20.666 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:20.666 slat (nsec): min=10816, max=40072, avg=12303.94, stdev=1891.54 00:10:20.666 clat (usec): min=120, max=321, avg=146.14, stdev=15.58 00:10:20.666 lat (usec): min=131, 
max=358, avg=158.45, stdev=16.11 00:10:20.666 clat percentiles (usec): 00:10:20.666 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 133], 00:10:20.666 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:10:20.666 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 176], 00:10:20.666 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 204], 00:10:20.666 | 99.99th=[ 322] 00:10:20.666 bw ( KiB/s): min=12288, max=12288, per=77.18%, avg=12288.00, stdev= 0.00, samples=1 00:10:20.666 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:20.666 lat (usec) : 250=94.28%, 500=5.69%, 1000=0.02% 00:10:20.666 lat (msec) : 2=0.02% 00:10:20.666 cpu : usr=4.40%, sys=8.20%, ctx=5119, majf=0, minf=2 00:10:20.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.666 issued rwts: total=2558,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.666 00:10:20.666 Run status group 0 (all jobs): 00:10:20.666 READ: bw=9.96MiB/s (10.4MB/s), 86.4KiB/s-9.98MiB/s (88.5kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1029msec 00:10:20.666 WRITE: bw=15.5MiB/s (16.3MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1029msec 00:10:20.666 00:10:20.666 Disk stats (read/write): 00:10:20.666 nvme0n1: ios=67/512, merge=0/0, ticks=716/99, in_queue=815, util=86.17% 00:10:20.666 nvme0n2: ios=44/512, merge=0/0, ticks=1640/75, in_queue=1715, util=88.99% 00:10:20.666 nvme0n3: ios=74/512, merge=0/0, ticks=765/100, in_queue=865, util=94.13% 00:10:20.666 nvme0n4: ios=2073/2406, merge=0/0, ticks=1311/332, in_queue=1643, util=94.18% 00:10:20.666 17:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:20.666 [global] 00:10:20.666 thread=1 00:10:20.666 invalidate=1 00:10:20.666 rw=randwrite 00:10:20.666 time_based=1 00:10:20.666 runtime=1 00:10:20.666 ioengine=libaio 00:10:20.666 direct=1 00:10:20.666 bs=4096 00:10:20.666 iodepth=1 00:10:20.666 norandommap=0 00:10:20.666 numjobs=1 00:10:20.666 00:10:20.666 verify_dump=1 00:10:20.666 verify_backlog=512 00:10:20.666 verify_state_save=0 00:10:20.666 do_verify=1 00:10:20.666 verify=crc32c-intel 00:10:20.666 [job0] 00:10:20.666 filename=/dev/nvme0n1 00:10:20.666 [job1] 00:10:20.666 filename=/dev/nvme0n2 00:10:20.666 [job2] 00:10:20.666 filename=/dev/nvme0n3 00:10:20.667 [job3] 00:10:20.667 filename=/dev/nvme0n4 00:10:20.667 Could not set queue depth (nvme0n1) 00:10:20.667 Could not set queue depth (nvme0n2) 00:10:20.667 Could not set queue depth (nvme0n3) 00:10:20.667 Could not set queue depth (nvme0n4) 00:10:20.925 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.925 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.925 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.925 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.925 fio-3.35 00:10:20.925 Starting 4 threads 00:10:22.301 00:10:22.301 job0: (groupid=0, jobs=1): err= 0: pid=2468464: Mon Dec 9 
17:20:51 2024 00:10:22.301 read: IOPS=25, BW=104KiB/s (106kB/s)(108KiB/1040msec) 00:10:22.301 slat (nsec): min=9874, max=24757, avg=20892.96, stdev=5244.04 00:10:22.301 clat (usec): min=224, max=42156, avg=34939.10, stdev=14751.76 00:10:22.301 lat (usec): min=246, max=42180, avg=34959.99, stdev=14751.03 00:10:22.301 clat percentiles (usec): 00:10:22.301 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[40633], 00:10:22.301 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:10:22.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:22.301 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:22.301 | 99.99th=[42206] 00:10:22.301 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:22.301 slat (nsec): min=9269, max=41126, avg=12392.19, stdev=2506.94 00:10:22.301 clat (usec): min=131, max=343, avg=171.80, stdev=19.57 00:10:22.301 lat (usec): min=142, max=384, avg=184.19, stdev=20.28 00:10:22.301 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:22.302 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:10:22.302 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:10:22.302 | 99.00th=[ 212], 99.50th=[ 231], 99.90th=[ 343], 99.95th=[ 343], 00:10:22.302 | 99.99th=[ 343] 00:10:22.302 bw ( KiB/s): min= 4096, max= 4096, per=18.91%, avg=4096.00, stdev= 0.00, samples=1 00:10:22.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:22.302 lat (usec) : 250=95.36%, 500=0.37% 00:10:22.302 lat (msec) : 50=4.27% 00:10:22.302 cpu : usr=0.19%, sys=1.15%, ctx=540, majf=0, minf=1 00:10:22.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.302 job1: (groupid=0, jobs=1): err= 0: pid=2468465: Mon Dec 9 17:20:51 2024 00:10:22.302 read: IOPS=1517, BW=6071KiB/s (6217kB/s)(6168KiB/1016msec) 00:10:22.302 slat (nsec): min=7244, max=45450, avg=8418.69, stdev=1780.39 00:10:22.302 clat (usec): min=191, max=41040, avg=406.59, stdev=2535.59 00:10:22.302 lat (usec): min=199, max=41065, avg=415.01, stdev=2536.48 00:10:22.302 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:10:22.302 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:10:22.302 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 293], 00:10:22.302 | 99.00th=[ 424], 99.50th=[ 840], 99.90th=[41157], 99.95th=[41157], 00:10:22.302 | 99.99th=[41157] 00:10:22.302 write: IOPS=2015, BW=8063KiB/s (8257kB/s)(8192KiB/1016msec); 0 zone resets 00:10:22.302 slat (nsec): min=10375, max=45158, avg=12050.96, stdev=2364.49 00:10:22.302 clat (usec): min=116, max=922, avg=166.22, stdev=39.40 00:10:22.302 lat (usec): min=127, max=936, avg=178.28, stdev=39.99 00:10:22.302 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:22.302 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:10:22.302 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 241], 00:10:22.302 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 693], 99.95th=[ 693], 00:10:22.302 | 99.99th=[ 922] 
00:10:22.302 bw ( KiB/s): min= 6040, max=10344, per=37.82%, avg=8192.00, stdev=3043.39, samples=2 00:10:22.302 iops : min= 1510, max= 2586, avg=2048.00, stdev=760.85, samples=2 00:10:22.302 lat (usec) : 250=83.23%, 500=16.38%, 750=0.14%, 1000=0.08% 00:10:22.302 lat (msec) : 50=0.17% 00:10:22.302 cpu : usr=3.45%, sys=5.22%, ctx=3592, majf=0, minf=1 00:10:22.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.302 job2: (groupid=0, jobs=1): err= 0: pid=2468468: Mon Dec 9 17:20:51 2024 00:10:22.302 read: IOPS=2501, BW=9.77MiB/s (10.2MB/s)(9.78MiB/1001msec) 00:10:22.302 slat (nsec): min=8059, max=39087, avg=8941.35, stdev=1153.77 00:10:22.302 clat (usec): min=169, max=333, avg=216.76, stdev=20.75 00:10:22.302 lat (usec): min=181, max=344, avg=225.70, stdev=20.75 00:10:22.302 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 198], 00:10:22.302 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:10:22.302 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 00:10:22.302 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 302], 00:10:22.302 | 99.99th=[ 334] 00:10:22.302 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:22.302 slat (nsec): min=11412, max=46780, avg=12830.94, stdev=1572.34 00:10:22.302 clat (usec): min=116, max=283, avg=151.04, stdev=18.21 00:10:22.302 lat (usec): min=128, max=330, avg=163.87, stdev=18.45 00:10:22.302 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 123], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 137], 00:10:22.302 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:10:22.302 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:10:22.302 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 249], 99.95th=[ 251], 00:10:22.302 | 99.99th=[ 285] 00:10:22.302 bw ( KiB/s): min=11568, max=11568, per=53.40%, avg=11568.00, stdev= 0.00, samples=1 00:10:22.302 iops : min= 2892, max= 2892, avg=2892.00, stdev= 0.00, samples=1 00:10:22.302 lat (usec) : 250=95.97%, 500=4.03% 00:10:22.302 cpu : usr=3.40%, sys=5.40%, ctx=5065, majf=0, minf=1 00:10:22.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 issued rwts: total=2504,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.302 job3: (groupid=0, jobs=1): err= 0: pid=2468469: Mon Dec 9 17:20:51 2024 00:10:22.302 read: IOPS=90, BW=362KiB/s (371kB/s)(372KiB/1027msec) 00:10:22.302 slat (nsec): min=8085, max=27042, avg=11983.46, stdev=5245.98 00:10:22.302 clat (usec): min=251, max=41111, avg=9937.32, stdev=17348.48 00:10:22.302 lat (usec): min=266, max=41127, avg=9949.31, stdev=17351.05 00:10:22.302 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:10:22.302 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 396], 60.00th=[ 404], 00:10:22.302 | 70.00th=[ 424], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:22.302 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:22.302 | 99.99th=[41157] 00:10:22.302 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:22.302 slat (nsec): min=9149, max=37207, avg=13012.63, stdev=3319.08 00:10:22.302 clat (usec): min=132, max=863, avg=181.50, stdev=59.13 00:10:22.302 lat (usec): min=145, max=878, avg=194.51, stdev=60.14 00:10:22.302 clat percentiles (usec): 00:10:22.302 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:10:22.302 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 174], 00:10:22.302 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 237], 95.00th=[ 269], 00:10:22.302 | 99.00th=[ 330], 99.50th=[ 644], 99.90th=[ 865], 99.95th=[ 865], 00:10:22.302 | 99.99th=[ 865] 00:10:22.302 bw ( KiB/s): min= 4096, max= 4096, per=18.91%, avg=4096.00, stdev= 0.00, samples=1 00:10:22.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:22.302 lat (usec) : 250=78.18%, 500=17.52%, 750=0.50%, 1000=0.17% 00:10:22.302 lat (msec) : 50=3.64% 00:10:22.302 cpu : usr=0.49%, sys=0.58%, ctx=606, majf=0, minf=1 00:10:22.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.302 issued rwts: total=93,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.302 00:10:22.302 Run status group 0 (all jobs): 00:10:22.302 READ: bw=15.6MiB/s (16.4MB/s), 104KiB/s-9.77MiB/s (106kB/s-10.2MB/s), io=16.3MiB (17.1MB), run=1001-1040msec 00:10:22.302 WRITE: bw=21.2MiB/s (22.2MB/s), 1969KiB/s-9.99MiB/s (2016kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1040msec 00:10:22.302 00:10:22.302 Disk stats (read/write): 00:10:22.302 nvme0n1: ios=44/512, merge=0/0, ticks=1644/88, in_queue=1732, util=89.18% 00:10:22.302 nvme0n2: ios=1573/2048, merge=0/0, ticks=598/298, in_queue=896, util=99.39% 00:10:22.302 nvme0n3: ios=2106/2161, merge=0/0, ticks=964/315, in_queue=1279, util=93.19% 00:10:22.302 nvme0n4: ios=147/512, merge=0/0, ticks=874/83, in_queue=957, util=97.35% 00:10:22.302 17:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:22.302 [global] 00:10:22.302 thread=1 00:10:22.302 invalidate=1 00:10:22.302 rw=write 00:10:22.302 time_based=1 00:10:22.302 runtime=1 00:10:22.302 ioengine=libaio 00:10:22.302 direct=1 00:10:22.302 bs=4096 00:10:22.302 iodepth=128 00:10:22.302 norandommap=0 00:10:22.302 numjobs=1 00:10:22.302 00:10:22.302 verify_dump=1 00:10:22.302 verify_backlog=512 00:10:22.302 verify_state_save=0 00:10:22.302 do_verify=1 00:10:22.302 verify=crc32c-intel 00:10:22.302 [job0] 00:10:22.302 filename=/dev/nvme0n1 00:10:22.302 [job1] 00:10:22.302 filename=/dev/nvme0n2 00:10:22.302 [job2] 00:10:22.302 filename=/dev/nvme0n3 00:10:22.302 [job3] 00:10:22.302 filename=/dev/nvme0n4 00:10:22.302 Could not set queue depth (nvme0n1) 00:10:22.302 Could not set queue depth (nvme0n2) 00:10:22.302 Could not set queue depth (nvme0n3) 00:10:22.302 Could not set queue depth (nvme0n4) 00:10:22.562 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.562 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.562 
job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.562 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.562 fio-3.35 00:10:22.562 Starting 4 threads 00:10:24.054 00:10:24.055 job0: (groupid=0, jobs=1): err= 0: pid=2468839: Mon Dec 9 17:20:52 2024 00:10:24.055 read: IOPS=5610, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:10:24.055 slat (nsec): min=1118, max=15514k, avg=82170.99, stdev=577852.43 00:10:24.055 clat (usec): min=4382, max=24766, avg=11222.82, stdev=2709.77 00:10:24.055 lat (usec): min=5037, max=24775, avg=11304.99, stdev=2753.21 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 6849], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 9503], 00:10:24.055 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[11338], 00:10:24.055 | 70.00th=[12125], 80.00th=[12911], 90.00th=[15795], 95.00th=[16712], 00:10:24.055 | 99.00th=[19792], 99.50th=[20317], 99.90th=[22938], 99.95th=[24511], 00:10:24.055 | 99.99th=[24773] 00:10:24.055 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:10:24.055 slat (nsec): min=1860, max=17394k, avg=74167.09, stdev=449597.45 00:10:24.055 clat (usec): min=2319, max=38466, avg=10335.72, stdev=3738.91 00:10:24.055 lat (usec): min=2325, max=40610, avg=10409.88, stdev=3755.30 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 7046], 20.00th=[ 8979], 00:10:24.055 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:10:24.055 | 70.00th=[10290], 80.00th=[10945], 90.00th=[12518], 95.00th=[14222], 00:10:24.055 | 99.00th=[28443], 99.50th=[30278], 99.90th=[36963], 99.95th=[38536], 00:10:24.055 | 99.99th=[38536] 00:10:24.055 bw ( KiB/s): min=22424, max=25768, per=32.15%, avg=24096.00, stdev=2364.57, samples=2 00:10:24.055 iops : min= 5606, max= 6442, avg=6024.00, stdev=591.14, samples=2 00:10:24.055 lat (msec) : 4=0.23%, 10=43.69%, 20=54.10%, 50=1.98% 00:10:24.055 cpu : usr=3.29%, sys=6.18%, ctx=655, majf=0, minf=1 00:10:24.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:24.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.055 issued rwts: total=5639,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.055 job1: (groupid=0, jobs=1): err= 0: pid=2468840: Mon Dec 9 17:20:52 2024 00:10:24.055 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:24.055 slat (nsec): min=1251, max=21566k, avg=118847.69, stdev=883368.95 00:10:24.055 clat (usec): min=4206, max=57285, avg=15273.04, stdev=6964.57 00:10:24.055 lat (usec): min=4219, max=63409, avg=15391.89, stdev=7020.36 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10814], 00:10:24.055 | 30.00th=[12125], 40.00th=[13042], 50.00th=[14091], 60.00th=[15008], 00:10:24.055 | 70.00th=[15795], 80.00th=[17695], 90.00th=[19792], 95.00th=[23462], 00:10:24.055 | 99.00th=[55837], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:10:24.055 | 99.99th=[57410] 00:10:24.055 write: IOPS=4605, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1004msec); 0 zone resets 00:10:24.055 slat (usec): min=2, max=14049, avg=90.82, stdev=683.43 00:10:24.055 clat (usec): min=1684, max=41633, avg=12289.39, stdev=4522.52 00:10:24.055 lat (usec): min=1983, 
max=41642, avg=12380.21, stdev=4578.55 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 4015], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 9503], 00:10:24.055 | 30.00th=[10028], 40.00th=[10290], 50.00th=[11338], 60.00th=[12518], 00:10:24.055 | 70.00th=[13173], 80.00th=[15533], 90.00th=[17171], 95.00th=[19792], 00:10:24.055 | 99.00th=[29492], 99.50th=[32637], 99.90th=[41681], 99.95th=[41681], 00:10:24.055 | 99.99th=[41681] 00:10:24.055 bw ( KiB/s): min=16384, max=20480, per=24.59%, avg=18432.00, stdev=2896.31, samples=2 00:10:24.055 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:24.055 lat (msec) : 2=0.05%, 4=0.44%, 10=19.41%, 20=72.80%, 50=6.61% 00:10:24.055 lat (msec) : 100=0.68% 00:10:24.055 cpu : usr=3.49%, sys=6.38%, ctx=302, majf=0, minf=1 00:10:24.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:24.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.055 issued rwts: total=4608,4624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.055 job2: (groupid=0, jobs=1): err= 0: pid=2468841: Mon Dec 9 17:20:52 2024 00:10:24.055 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:10:24.055 slat (nsec): min=1172, max=46585k, avg=187330.33, stdev=1469420.10 00:10:24.055 clat (usec): min=8598, max=67823, avg=24345.87, stdev=12136.78 00:10:24.055 lat (usec): min=8602, max=67830, avg=24533.20, stdev=12222.80 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[11731], 20.00th=[13304], 00:10:24.055 | 30.00th=[15008], 40.00th=[19530], 50.00th=[22938], 60.00th=[26346], 00:10:24.055 | 70.00th=[28967], 80.00th=[31851], 90.00th=[37487], 95.00th=[46924], 00:10:24.055 | 99.00th=[66323], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:10:24.055 | 99.99th=[67634] 00:10:24.055 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec); 0 zone resets 00:10:24.055 slat (usec): min=2, max=21520, avg=166.21, stdev=960.28 00:10:24.055 clat (usec): min=4358, max=55508, avg=21510.47, stdev=12102.49 00:10:24.055 lat (usec): min=5765, max=55522, avg=21676.68, stdev=12182.27 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[11600], 20.00th=[12780], 00:10:24.055 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14877], 60.00th=[21103], 00:10:24.055 | 70.00th=[25560], 80.00th=[26870], 90.00th=[41681], 95.00th=[50070], 00:10:24.055 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:10:24.055 | 99.99th=[55313] 00:10:24.055 bw ( KiB/s): min= 8616, max=14512, per=15.43%, avg=11564.00, stdev=4169.10, samples=2 00:10:24.055 iops : min= 2154, max= 3628, avg=2891.00, stdev=1042.28, samples=2 00:10:24.055 lat (msec) : 10=4.77%, 20=47.13%, 50=43.04%, 100=5.06% 00:10:24.055 cpu : usr=1.99%, sys=4.38%, ctx=223, majf=0, minf=2 00:10:24.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:24.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.055 issued rwts: total=2560,3018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.055 job3: (groupid=0, jobs=1): err= 0: pid=2468842: Mon Dec 9 17:20:52 2024 00:10:24.055 read: IOPS=4660, BW=18.2MiB/s 
(19.1MB/s)(18.4MiB/1009msec) 00:10:24.055 slat (nsec): min=1432, max=12240k, avg=115431.43, stdev=809743.07 00:10:24.055 clat (usec): min=4237, max=54735, avg=13580.12, stdev=4555.40 00:10:24.055 lat (usec): min=4245, max=54738, avg=13695.55, stdev=4632.61 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 5276], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11207], 00:10:24.055 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[13435], 00:10:24.055 | 70.00th=[14353], 80.00th=[15270], 90.00th=[18220], 95.00th=[20317], 00:10:24.055 | 99.00th=[30278], 99.50th=[42206], 99.90th=[54789], 99.95th=[54789], 00:10:24.055 | 99.99th=[54789] 00:10:24.055 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:10:24.055 slat (usec): min=2, max=12125, avg=83.48, stdev=444.42 00:10:24.055 clat (usec): min=2937, max=54736, avg=12492.39, stdev=5542.88 00:10:24.055 lat (usec): min=2949, max=54740, avg=12575.86, stdev=5567.42 00:10:24.055 clat percentiles (usec): 00:10:24.055 | 1.00th=[ 4113], 5.00th=[ 6259], 10.00th=[ 8029], 20.00th=[10552], 00:10:24.055 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:10:24.055 | 70.00th=[11863], 80.00th=[13566], 90.00th=[17433], 95.00th=[22938], 00:10:24.055 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:24.055 | 99.99th=[54789] 00:10:24.055 bw ( KiB/s): min=17776, max=22920, per=27.15%, avg=20348.00, stdev=3637.36, samples=2 00:10:24.055 iops : min= 4444, max= 5730, avg=5087.00, stdev=909.34, samples=2 00:10:24.055 lat (msec) : 4=0.50%, 10=13.14%, 20=79.86%, 50=6.42%, 100=0.07% 00:10:24.055 cpu : usr=3.97%, sys=6.05%, ctx=593, majf=0, minf=1 00:10:24.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:24.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.055 issued rwts: total=4702,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.055 00:10:24.055 Run status group 0 (all jobs): 00:10:24.055 READ: bw=67.8MiB/s (71.1MB/s), 9.94MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=68.4MiB (71.7MB), run=1004-1009msec 00:10:24.055 WRITE: bw=73.2MiB/s (76.7MB/s), 11.7MiB/s-23.9MiB/s (12.3MB/s-25.0MB/s), io=73.9MiB (77.4MB), run=1004-1009msec 00:10:24.055 00:10:24.055 Disk stats (read/write): 00:10:24.055 nvme0n1: ios=4460/4608, merge=0/0, ticks=28656/25651, in_queue=54307, util=79.66% 00:10:24.055 nvme0n2: ios=3609/3687, merge=0/0, ticks=36181/29432, in_queue=65613, util=96.28% 00:10:24.055 nvme0n3: ios=2358/2560, merge=0/0, ticks=19166/14698, in_queue=33864, util=94.08% 00:10:24.055 nvme0n4: ios=3623/3823, merge=0/0, ticks=48140/48986, in_queue=97126, util=100.00% 00:10:24.055 17:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:24.055 [global] 00:10:24.055 thread=1 00:10:24.055 invalidate=1 00:10:24.055 rw=randwrite 00:10:24.055 time_based=1 00:10:24.055 runtime=1 00:10:24.055 ioengine=libaio 00:10:24.055 direct=1 00:10:24.055 bs=4096 00:10:24.055 iodepth=128 00:10:24.055 norandommap=0 00:10:24.055 numjobs=1 00:10:24.055 00:10:24.055 verify_dump=1 00:10:24.055 verify_backlog=512 00:10:24.055 verify_state_save=0 00:10:24.055 do_verify=1 00:10:24.055 verify=crc32c-intel 00:10:24.055 [job0] 00:10:24.055 filename=/dev/nvme0n1 00:10:24.055 
[job1] 00:10:24.055 filename=/dev/nvme0n2 00:10:24.055 [job2] 00:10:24.055 filename=/dev/nvme0n3 00:10:24.055 [job3] 00:10:24.055 filename=/dev/nvme0n4 00:10:24.055 Could not set queue depth (nvme0n1) 00:10:24.055 Could not set queue depth (nvme0n2) 00:10:24.055 Could not set queue depth (nvme0n3) 00:10:24.055 Could not set queue depth (nvme0n4) 00:10:24.313 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.313 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.313 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.313 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.313 fio-3.35 00:10:24.313 Starting 4 threads 00:10:25.686 00:10:25.686 job0: (groupid=0, jobs=1): err= 0: pid=2469211: Mon Dec 9 17:20:54 2024 00:10:25.686 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:10:25.686 slat (nsec): min=1583, max=16936k, avg=140202.74, stdev=937596.87 00:10:25.686 clat (usec): min=4455, max=46477, avg=16069.81, stdev=7169.90 00:10:25.686 lat (usec): min=4463, max=46485, avg=16210.02, stdev=7233.59 00:10:25.686 clat percentiles (usec): 00:10:25.686 | 1.00th=[ 5735], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9896], 00:10:25.686 | 30.00th=[10683], 40.00th=[13042], 50.00th=[14353], 60.00th=[15795], 00:10:25.686 | 70.00th=[20055], 80.00th=[21103], 90.00th=[23725], 95.00th=[30278], 00:10:25.686 | 99.00th=[42206], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:10:25.686 | 99.99th=[46400] 00:10:25.686 write: IOPS=3491, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1010msec); 0 zone resets 00:10:25.686 slat (usec): min=2, max=18637, avg=156.30, stdev=805.51 00:10:25.686 clat (usec): min=2217, max=54533, avg=22321.44, stdev=9410.06 00:10:25.686 lat (usec): min=2223, max=54546, avg=22477.74, stdev=9478.95 00:10:25.686 clat percentiles (usec): 00:10:25.686 | 1.00th=[ 4293], 5.00th=[ 8717], 10.00th=[13173], 20.00th=[16319], 00:10:25.686 | 30.00th=[16909], 40.00th=[17695], 50.00th=[19792], 60.00th=[21365], 00:10:25.686 | 70.00th=[25035], 80.00th=[31589], 90.00th=[35914], 95.00th=[39584], 00:10:25.686 | 99.00th=[47973], 99.50th=[51119], 99.90th=[54264], 99.95th=[54264], 00:10:25.686 | 99.99th=[54789] 00:10:25.686 bw ( KiB/s): min=11904, max=15280, per=22.36%, avg=13592.00, stdev=2387.19, samples=2 00:10:25.686 iops : min= 2976, max= 3820, avg=3398.00, stdev=596.80, samples=2 00:10:25.686 lat (msec) : 4=0.42%, 10=13.22%, 20=46.06%, 50=39.95%, 100=0.35% 00:10:25.686 cpu : usr=3.37%, sys=3.47%, ctx=420, majf=0, minf=1 00:10:25.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:25.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.686 issued rwts: total=3072,3526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.686 job1: (groupid=0, jobs=1): err= 0: pid=2469215: Mon Dec 9 17:20:54 2024 00:10:25.686 read: IOPS=4509, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1006msec) 00:10:25.686 slat (nsec): min=1355, max=18042k, avg=108898.60, stdev=776286.23 00:10:25.686 clat (usec): min=3091, max=45346, avg=14346.20, stdev=7727.73 00:10:25.686 lat (usec): min=4254, max=45350, avg=14455.10, stdev=7786.34 00:10:25.686 clat percentiles (usec): 
00:10:25.686 | 1.00th=[ 5276], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9765], 00:10:25.686 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11600], 00:10:25.686 | 70.00th=[13173], 80.00th=[16909], 90.00th=[28443], 95.00th=[32375], 00:10:25.686 | 99.00th=[39584], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:25.686 | 99.99th=[45351] 00:10:25.686 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:10:25.686 slat (nsec): min=1879, max=11149k, avg=104465.93, stdev=555867.62 00:10:25.686 clat (usec): min=1018, max=46183, avg=13327.63, stdev=6991.78 00:10:25.686 lat (usec): min=1026, max=46191, avg=13432.10, stdev=7040.51 00:10:25.686 clat percentiles (usec): 00:10:25.686 | 1.00th=[ 4178], 5.00th=[ 5735], 10.00th=[ 7963], 20.00th=[ 9634], 00:10:25.686 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10421], 60.00th=[12125], 00:10:25.686 | 70.00th=[15795], 80.00th=[16909], 90.00th=[19006], 95.00th=[26608], 00:10:25.686 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:10:25.686 | 99.99th=[46400] 00:10:25.686 bw ( KiB/s): min=17248, max=19616, per=30.32%, avg=18432.00, stdev=1674.43, samples=2 00:10:25.686 iops : min= 4312, max= 4904, avg=4608.00, stdev=418.61, samples=2 00:10:25.686 lat (msec) : 2=0.07%, 4=0.45%, 10=30.07%, 20=56.35%, 50=13.07% 00:10:25.686 cpu : usr=2.89%, sys=5.27%, ctx=509, majf=0, minf=1 00:10:25.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:25.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.686 issued rwts: total=4537,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.686 job2: (groupid=0, jobs=1): err= 0: pid=2469225: Mon Dec 9 17:20:54 2024 00:10:25.686 read: IOPS=3091, BW=12.1MiB/s (12.7MB/s)(12.7MiB/1048msec) 00:10:25.686 slat (nsec): min=1115, max=16196k, avg=136906.75, stdev=884206.76 00:10:25.686 clat (usec): min=3867, max=59676, avg=19420.68, stdev=9739.24 00:10:25.686 lat (usec): min=3873, max=62546, avg=19557.58, stdev=9761.21 00:10:25.686 clat percentiles (usec): 00:10:25.686 | 1.00th=[ 5538], 5.00th=[ 7832], 10.00th=[10552], 20.00th=[13042], 00:10:25.686 | 30.00th=[15139], 40.00th=[15533], 50.00th=[16712], 60.00th=[19792], 00:10:25.686 | 70.00th=[20841], 80.00th=[23200], 90.00th=[31327], 95.00th=[36439], 00:10:25.686 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:10:25.686 | 99.99th=[59507] 00:10:25.686 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1048msec); 0 zone resets 00:10:25.686 slat (usec): min=2, max=15664, avg=146.79, stdev=696.53 00:10:25.686 clat (usec): min=1424, max=35278, avg=19534.73, stdev=4935.20 00:10:25.686 lat (usec): min=1436, max=35310, avg=19681.52, stdev=4982.05 00:10:25.686 clat percentiles (usec): 00:10:25.686 | 1.00th=[ 7439], 5.00th=[11994], 10.00th=[13173], 20.00th=[15270], 00:10:25.686 | 30.00th=[16909], 40.00th=[19268], 50.00th=[20579], 60.00th=[20841], 00:10:25.686 | 70.00th=[21365], 80.00th=[21890], 90.00th=[25035], 95.00th=[28705], 00:10:25.686 | 99.00th=[32375], 99.50th=[32637], 99.90th=[33424], 99.95th=[33817], 00:10:25.686 | 99.99th=[35390] 00:10:25.686 bw ( KiB/s): min=12624, max=16048, per=23.58%, avg=14336.00, stdev=2421.13, samples=2 00:10:25.686 iops : min= 3156, max= 4012, avg=3584.00, stdev=605.28, samples=2 00:10:25.686 lat (msec) : 2=0.03%, 4=0.06%, 10=5.50%, 20=47.01%, 50=46.18% 
00:10:25.686 lat (msec) : 100=1.23% 00:10:25.686 cpu : usr=2.39%, sys=3.06%, ctx=451, majf=0, minf=2 00:10:25.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:25.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.686 issued rwts: total=3240,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.686 job3: (groupid=0, jobs=1): err= 0: pid=2469229: Mon Dec 9 17:20:54 2024 00:10:25.686 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:10:25.686 slat (nsec): min=1305, max=7109.6k, avg=122701.55, stdev=602290.36 00:10:25.686 clat (usec): min=5731, max=28424, avg=15567.22, stdev=2988.23 00:10:25.686 lat (usec): min=5739, max=28442, avg=15689.92, stdev=3038.43 00:10:25.686 clat percentiles (usec): 00:10:25.686 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11469], 20.00th=[13042], 00:10:25.686 | 30.00th=[14353], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:10:25.686 | 70.00th=[16450], 80.00th=[17171], 90.00th=[19530], 95.00th=[20841], 00:10:25.686 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25822], 99.95th=[26608], 00:10:25.686 | 99.99th=[28443] 00:10:25.687 write: IOPS=4180, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1007msec); 0 zone resets 00:10:25.687 slat (usec): min=2, max=14615, avg=110.62, stdev=519.78 00:10:25.687 clat (usec): min=3370, max=36431, avg=15130.10, stdev=4931.36 00:10:25.687 lat (usec): min=6261, max=36439, avg=15240.72, stdev=4958.86 00:10:25.687 clat percentiles (usec): 00:10:25.687 | 1.00th=[ 6325], 5.00th=[ 9372], 10.00th=[11076], 20.00th=[11600], 00:10:25.687 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13304], 60.00th=[14877], 00:10:25.687 | 70.00th=[17957], 80.00th=[19268], 90.00th=[20579], 95.00th=[21103], 00:10:25.687 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:10:25.687 | 99.99th=[36439] 00:10:25.687 bw ( KiB/s): min=12632, max=20136, per=26.95%, avg=16384.00, stdev=5306.13, samples=2 00:10:25.687 iops : min= 3158, max= 5034, avg=4096.00, stdev=1326.53, samples=2 00:10:25.687 lat (msec) : 4=0.01%, 10=4.02%, 20=83.78%, 50=12.18% 00:10:25.687 cpu : usr=3.48%, sys=6.16%, ctx=517, majf=0, minf=1 00:10:25.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:25.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.687 issued rwts: total=4096,4210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.687 00:10:25.687 Run status group 0 (all jobs): 00:10:25.687 READ: bw=55.7MiB/s (58.4MB/s), 11.9MiB/s-17.6MiB/s (12.5MB/s-18.5MB/s), io=58.4MiB (61.2MB), run=1006-1048msec 00:10:25.687 WRITE: bw=59.4MiB/s (62.3MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=62.2MiB (65.2MB), run=1006-1048msec 00:10:25.687 00:10:25.687 Disk stats (read/write): 00:10:25.687 nvme0n1: ios=2611/2735, merge=0/0, ticks=42978/60409, in_queue=103387, util=92.99% 00:10:25.687 nvme0n2: ios=3796/4096, merge=0/0, ticks=34421/29485, in_queue=63906, util=97.45% 00:10:25.687 nvme0n3: ios=2657/3072, merge=0/0, ticks=29922/32733, in_queue=62655, util=93.49% 00:10:25.687 nvme0n4: ios=3565/3584, merge=0/0, ticks=19465/18043, in_queue=37508, util=100.00% 00:10:25.687 17:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 
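The run that follows is the hotplug half of target/fio.sh: a 10-second read job is launched in the background through the fio wrapper, the RAID and malloc bdevs are deleted out from under it via RPC, and the script then waits on the fio pid, expecting a non-zero exit. A minimal bash sketch of that pattern, condensed from the trace below — the paths and bdev names are the ones this workspace uses, and the interleaving of deletions with fio's error output is simplified here:

  #!/usr/bin/env bash
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Launch a long-running read workload in the background; the wrapper
  # generates the per-device job file and starts fio against nvme0n1..n4.
  "$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Delete the backing bdevs while I/O is still in flight.
  "$spdk/scripts/rpc.py" bdev_raid_delete concat0
  "$spdk/scripts/rpc.py" bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      "$spdk/scripts/rpc.py" bdev_malloc_delete "$malloc_bdev"
  done

  # fio should now hit I/O errors on the vanished namespaces and exit
  # non-zero; capture that status for the expected-failure check.
  fio_status=0
  wait "$fio_pid" || fio_status=$?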
00:10:25.687 17:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2469450 00:10:25.687 17:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:25.687 17:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:25.687 [global] 00:10:25.687 thread=1 00:10:25.687 invalidate=1 00:10:25.687 rw=read 00:10:25.687 time_based=1 00:10:25.687 runtime=10 00:10:25.687 ioengine=libaio 00:10:25.687 direct=1 00:10:25.687 bs=4096 00:10:25.687 iodepth=1 00:10:25.687 norandommap=1 00:10:25.687 numjobs=1 00:10:25.687 00:10:25.687 [job0] 00:10:25.687 filename=/dev/nvme0n1 00:10:25.687 [job1] 00:10:25.687 filename=/dev/nvme0n2 00:10:25.687 [job2] 00:10:25.687 filename=/dev/nvme0n3 00:10:25.687 [job3] 00:10:25.687 filename=/dev/nvme0n4 00:10:25.687 Could not set queue depth (nvme0n1) 00:10:25.687 Could not set queue depth (nvme0n2) 00:10:25.687 Could not set queue depth (nvme0n3) 00:10:25.687 Could not set queue depth (nvme0n4) 00:10:25.945 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.945 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.945 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.945 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.945 fio-3.35 00:10:25.945 Starting 4 threads 00:10:28.473 17:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:28.731 17:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:28.731 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=6594560, buflen=4096 00:10:28.731 fio: pid=2469710, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.989 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28389376, buflen=4096 00:10:28.989 fio: pid=2469703, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.989 17:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.989 17:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:29.246 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.247 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:29.247 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:10:29.247 fio: pid=2469675, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:29.247 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=29229056, buflen=4096 00:10:29.247 fio: pid=2469690, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:29.247 17:20:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.247 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:29.247 00:10:29.247 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2469675: Mon Dec 9 17:20:58 2024 00:10:29.247 read: IOPS=25, BW=102KiB/s (104kB/s)(324KiB/3185msec) 00:10:29.247 slat (nsec): min=7997, max=68571, avg=13893.78, stdev=8535.97 00:10:29.247 clat (usec): min=246, max=42031, avg=39040.32, stdev=8874.29 00:10:29.247 lat (usec): min=257, max=42046, avg=39054.03, stdev=8871.66 00:10:29.247 clat percentiles (usec): 00:10:29.247 | 1.00th=[ 247], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:29.247 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:29.247 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:29.247 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:29.247 | 99.99th=[42206] 00:10:29.247 bw ( KiB/s): min= 96, max= 112, per=0.54%, avg=102.00, stdev= 7.04, samples=6 00:10:29.247 iops : min= 24, max= 28, avg=25.50, stdev= 1.76, samples=6 00:10:29.247 lat (usec) : 250=1.22%, 500=3.66% 00:10:29.247 lat (msec) : 50=93.90% 00:10:29.247 cpu : usr=0.06%, sys=0.00%, ctx=84, majf=0, minf=1 00:10:29.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.247 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2469690: Mon Dec 9 17:20:58 2024 00:10:29.247 read: IOPS=2139, BW=8556KiB/s (8762kB/s)(27.9MiB/3336msec) 00:10:29.247 slat (usec): min=6, max=22731, avg=14.08, stdev=320.78 00:10:29.247 clat (usec): min=155, max=45869, avg=452.02, stdev=3231.89 00:10:29.247 lat (usec): min=161, max=65003, avg=464.74, stdev=3287.10 00:10:29.247 clat percentiles (usec): 00:10:29.247 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:10:29.247 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:10:29.247 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 208], 95.00th=[ 215], 00:10:29.247 | 99.00th=[ 269], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:10:29.247 | 99.99th=[45876] 00:10:29.247 bw ( KiB/s): min= 144, max=19840, per=48.52%, avg=9167.33, stdev=9094.56, samples=6 00:10:29.247 iops : min= 36, max= 4960, avg=2291.83, stdev=2273.64, samples=6 00:10:29.247 lat (usec) : 250=98.85%, 500=0.49% 00:10:29.247 lat (msec) : 2=0.01%, 50=0.63% 00:10:29.247 cpu : usr=0.60%, sys=2.13%, ctx=7140, majf=0, minf=1 00:10:29.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 issued rwts: total=7137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.247 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=2469703: Mon Dec 9 17:20:58 2024 00:10:29.247 read: IOPS=2340, BW=9360KiB/s (9585kB/s)(27.1MiB/2962msec) 00:10:29.247 slat (nsec): min=6837, max=66426, avg=7987.80, stdev=1616.59 00:10:29.247 clat (usec): min=161, max=41968, avg=414.75, stdev=2873.40 00:10:29.247 lat (usec): min=169, max=41992, avg=422.74, stdev=2874.59 00:10:29.247 clat percentiles (usec): 00:10:29.247 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:10:29.247 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:10:29.247 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 260], 95.00th=[ 277], 00:10:29.247 | 99.00th=[ 293], 99.50th=[28181], 99.90th=[41157], 99.95th=[41157], 00:10:29.247 | 99.99th=[42206] 00:10:29.247 bw ( KiB/s): min= 96, max=19104, per=54.83%, avg=10360.00, stdev=9458.87, samples=5 00:10:29.247 iops : min= 24, max= 4776, avg=2590.00, stdev=2364.72, samples=5 00:10:29.247 lat (usec) : 250=88.60%, 500=10.80%, 750=0.06%, 1000=0.01% 00:10:29.247 lat (msec) : 50=0.50% 00:10:29.247 cpu : usr=0.64%, sys=2.30%, ctx=6935, majf=0, minf=2 00:10:29.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 issued rwts: total=6932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.247 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2469710: Mon Dec 9 17:20:58 2024 00:10:29.247 read: IOPS=584, BW=2337KiB/s (2393kB/s)(6440KiB/2756msec) 00:10:29.247 slat (nsec): min=7064, max=32647, avg=8926.29, stdev=3393.79 00:10:29.247 clat (usec): min=170, max=41963, avg=1687.45, stdev=7608.04 00:10:29.247 lat (usec): min=178, max=41986, avg=1696.37, stdev=7610.72 00:10:29.247 clat percentiles (usec): 00:10:29.247 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:29.247 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:10:29.247 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 273], 95.00th=[ 285], 00:10:29.247 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:29.247 | 99.99th=[42206] 00:10:29.247 bw ( KiB/s): min= 96, max= 9520, per=13.58%, avg=2566.40, stdev=4087.29, samples=5 00:10:29.247 iops : min= 24, max= 2380, avg=641.60, stdev=1021.82, samples=5 00:10:29.247 lat (usec) : 250=83.99%, 500=12.35% 00:10:29.247 lat (msec) : 50=3.60% 00:10:29.247 cpu : usr=0.29%, sys=0.51%, ctx=1612, majf=0, minf=2 00:10:29.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.247 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.247 00:10:29.247 Run status group 0 (all jobs): 00:10:29.247 READ: bw=18.5MiB/s (19.3MB/s), 102KiB/s-9360KiB/s (104kB/s-9585kB/s), io=61.6MiB (64.5MB), run=2756-3336msec 00:10:29.247 00:10:29.247 Disk stats (read/write): 00:10:29.247 nvme0n1: ios=118/0, merge=0/0, ticks=4081/0, in_queue=4081, util=98.98% 00:10:29.247 nvme0n2: ios=7129/0, merge=0/0, ticks=2977/0, in_queue=2977, util=95.39% 00:10:29.247 nvme0n3: ios=6577/0, merge=0/0, ticks=3268/0, in_queue=3268, util=100.00% 00:10:29.247 nvme0n4: ios=1643/0, merge=0/0, 
ticks=3223/0, in_queue=3223, util=99.11% 00:10:29.505 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.505 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:29.763 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.763 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:30.020 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.020 17:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:30.278 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.278 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:30.278 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:30.278 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2469450 00:10:30.278 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:30.278 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:30.536 nvmf hotplug test: fio failed as expected 00:10:30.536 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 
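
[editor's note] The io_u errors above are the point of this fio suite, not a regression. fio-wrapper translates its flags into the job file echoed earlier (-i 4096 -> bs=4096, -d 1 -> iodepth=1, -t read -> rw=read, -r 10 -> runtime=10, by all appearances), and while the four jobs are still reading /dev/nvme0n1..n4, target/fio.sh hot-removes the backing bdevs over RPC. The reads then complete with err=95 (Operation not supported) or err=5 (Input/output error), fio exits non-zero (fio_status=4), and the script asserts exactly that ("fio failed as expected"). A condensed sketch of the removal loop, using only RPC names that appear in this run:

    # hot-remove the namespaces' backing bdevs while fio is still running
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_raid_delete concat0
    $RPC bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $RPC bdev_malloc_delete "$m"
    done
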
00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.794 rmmod nvme_tcp 00:10:30.794 rmmod nvme_fabrics 00:10:30.794 rmmod nvme_keyring 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2466534 ']' 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2466534 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2466534 ']' 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2466534 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466534 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466534' 00:10:30.794 killing process with pid 2466534 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2466534 00:10:30.794 17:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2466534 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.053 17:21:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.053 17:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.958 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.958 00:10:32.958 real 0m27.602s 00:10:32.958 user 1m50.159s 00:10:32.958 sys 0m8.433s 00:10:32.958 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.958 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.958 ************************************ 00:10:32.958 END TEST nvmf_fio_target 00:10:32.958 ************************************ 00:10:33.217 17:21:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:33.217 17:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.217 17:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.217 17:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.217 ************************************ 00:10:33.217 START TEST nvmf_bdevio 00:10:33.217 ************************************ 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:33.218 * Looking for test storage... 
00:10:33.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.218 --rc genhtml_branch_coverage=1 00:10:33.218 --rc genhtml_function_coverage=1 00:10:33.218 --rc genhtml_legend=1 00:10:33.218 --rc geninfo_all_blocks=1 00:10:33.218 --rc geninfo_unexecuted_blocks=1 00:10:33.218 00:10:33.218 ' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.218 --rc genhtml_branch_coverage=1 00:10:33.218 --rc genhtml_function_coverage=1 00:10:33.218 --rc genhtml_legend=1 00:10:33.218 --rc geninfo_all_blocks=1 00:10:33.218 --rc geninfo_unexecuted_blocks=1 00:10:33.218 00:10:33.218 ' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.218 --rc genhtml_branch_coverage=1 00:10:33.218 --rc genhtml_function_coverage=1 00:10:33.218 --rc genhtml_legend=1 00:10:33.218 --rc geninfo_all_blocks=1 00:10:33.218 --rc geninfo_unexecuted_blocks=1 00:10:33.218 00:10:33.218 ' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.218 --rc genhtml_branch_coverage=1 00:10:33.218 --rc genhtml_function_coverage=1 00:10:33.218 --rc genhtml_legend=1 00:10:33.218 --rc geninfo_all_blocks=1 00:10:33.218 --rc geninfo_unexecuted_blocks=1 00:10:33.218 00:10:33.218 ' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.218 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.478 17:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:40.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:40.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.048 17:21:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:40.048 Found net devices under 0000:af:00.0: cvl_0_0 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:40.048 Found net devices under 0000:af:00.1: cvl_0_1 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.048 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.049 
17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:10:40.049 00:10:40.049 --- 10.0.0.2 ping statistics --- 00:10:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.049 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:10:40.049 00:10:40.049 --- 10.0.0.1 ping statistics --- 00:10:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.049 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2474520 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2474520 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2474520 ']' 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.049 17:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.049 [2024-12-09 17:21:08.422599] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
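
[editor's note] The two pings close out nvmf_tcp_init: with only physical E810 ports available (NET_TYPE=phy), the harness splits them across network namespaces instead of using loopback. Port cvl_0_0 becomes the target interface inside netns cvl_0_0_ns_spdk at 10.0.0.2; its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables ACCEPT rule for the NVMe/TCP port. A condensed replay of the commands issued above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
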
00:10:40.049 [2024-12-09 17:21:08.422647] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.049 [2024-12-09 17:21:08.500789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.049 [2024-12-09 17:21:08.540615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.049 [2024-12-09 17:21:08.540654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.049 [2024-12-09 17:21:08.540661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.049 [2024-12-09 17:21:08.540666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.049 [2024-12-09 17:21:08.540671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.049 [2024-12-09 17:21:08.542263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:40.049 [2024-12-09 17:21:08.542381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:40.049 [2024-12-09 17:21:08.542468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.049 [2024-12-09 17:21:08.542468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:40.307 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.307 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:40.307 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.307 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.307 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.308 [2024-12-09 17:21:09.309250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.308 Malloc0 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.308 17:21:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.308 [2024-12-09 17:21:09.369674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:40.308 { 00:10:40.308 "params": { 00:10:40.308 "name": "Nvme$subsystem", 00:10:40.308 "trtype": "$TEST_TRANSPORT", 00:10:40.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:40.308 "adrfam": "ipv4", 00:10:40.308 "trsvcid": "$NVMF_PORT", 00:10:40.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:40.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:40.308 "hdgst": ${hdgst:-false}, 00:10:40.308 "ddgst": ${ddgst:-false} 00:10:40.308 }, 00:10:40.308 "method": "bdev_nvme_attach_controller" 00:10:40.308 } 00:10:40.308 EOF 00:10:40.308 )") 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:40.308 17:21:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:40.308 "params": { 00:10:40.308 "name": "Nvme1", 00:10:40.308 "trtype": "tcp", 00:10:40.308 "traddr": "10.0.0.2", 00:10:40.308 "adrfam": "ipv4", 00:10:40.308 "trsvcid": "4420", 00:10:40.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:40.308 "hdgst": false, 00:10:40.308 "ddgst": false 00:10:40.308 }, 00:10:40.308 "method": "bdev_nvme_attach_controller" 00:10:40.308 }' 00:10:40.308 [2024-12-09 17:21:09.419410] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
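
[editor's note] At this point bdevio.sh has a complete one-namespace target, and the JSON printed above is the config the bdevio app consumes via --json /dev/fd/62 to attach to it as bdev Nvme1. The target-side RPCs below are verbatim from the xtrace; the rpc.py attach shown last is a hedged equivalent of that JSON (the flag spellings are an assumption, since the test itself never issues this call):

    # target side: tcp transport, 64 MiB / 512 B malloc bdev, subsystem, listener
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, roughly what the generated JSON encodes if issued by hand
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
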
00:10:40.308 [2024-12-09 17:21:09.419456] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474766 ] 00:10:40.566 [2024-12-09 17:21:09.492457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.566 [2024-12-09 17:21:09.535159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.566 [2024-12-09 17:21:09.535194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.566 [2024-12-09 17:21:09.535194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.824 I/O targets: 00:10:40.824 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:40.824 00:10:40.824 00:10:40.824 CUnit - A unit testing framework for C - Version 2.1-3 00:10:40.824 http://cunit.sourceforge.net/ 00:10:40.824 00:10:40.824 00:10:40.824 Suite: bdevio tests on: Nvme1n1 00:10:40.824 Test: blockdev write read block ...passed 00:10:40.824 Test: blockdev write zeroes read block ...passed 00:10:40.824 Test: blockdev write zeroes read no split ...passed 00:10:40.824 Test: blockdev write zeroes read split ...passed 00:10:40.824 Test: blockdev write zeroes read split partial ...passed 00:10:40.824 Test: blockdev reset ...[2024-12-09 17:21:09.977234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:40.824 [2024-12-09 17:21:09.977297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de8b0 (9): Bad file descriptor 00:10:41.081 [2024-12-09 17:21:10.037871] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:41.081 passed 00:10:41.081 Test: blockdev write read 8 blocks ...passed 00:10:41.081 Test: blockdev write read size > 128k ...passed 00:10:41.081 Test: blockdev write read invalid size ...passed 00:10:41.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.082 Test: blockdev write read max offset ...passed 00:10:41.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.082 Test: blockdev writev readv 8 blocks ...passed 00:10:41.340 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.340 Test: blockdev writev readv block ...passed 00:10:41.340 Test: blockdev writev readv size > 128k ...passed 00:10:41.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.340 Test: blockdev comparev and writev ...[2024-12-09 17:21:10.332008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.332865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.340 [2024-12-09 17:21:10.332872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:41.340 passed 00:10:41.340 Test: blockdev nvme passthru rw ...passed 00:10:41.340 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:21:10.414574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.340 [2024-12-09 17:21:10.414592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.414698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.340 [2024-12-09 17:21:10.414708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.414815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.340 [2024-12-09 17:21:10.414824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:41.340 [2024-12-09 17:21:10.414920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.340 [2024-12-09 17:21:10.414929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:41.340 passed 00:10:41.340 Test: blockdev nvme admin passthru ...passed 00:10:41.340 Test: blockdev copy ...passed 00:10:41.340 00:10:41.340 Run Summary: Type Total Ran Passed Failed Inactive 00:10:41.340 suites 1 1 n/a 0 0 00:10:41.340 tests 23 23 23 0 0 00:10:41.340 asserts 152 152 152 0 n/a 00:10:41.340 00:10:41.340 Elapsed time = 1.307 seconds 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.598 rmmod nvme_tcp 00:10:41.598 rmmod nvme_fabrics 00:10:41.598 rmmod nvme_keyring 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
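
[editor's note] The rmmod lines above and the lines that follow are nvmftestfini, the mirror image of nvmftestinit: unload the kernel initiator stack, kill the nvmf_tgt launched by nvmfappstart, and undo the firewall and namespace changes. Roughly, per the xtrace (the namespace deletion is an assumption about what _remove_spdk_ns does; the other steps appear verbatim or near-verbatim in the log):

    sync
    modprobe -v -r nvme-tcp            # drags nvme_fabrics/nvme_keyring out with it
    modprobe -v -r nvme-fabrics
    kill 2474520                       # nvmf_tgt pid recorded by nvmfappstart
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
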
00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2474520 ']' 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2474520 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2474520 ']' 00:10:41.598 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2474520 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474520 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474520' 00:10:41.599 killing process with pid 2474520 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2474520 00:10:41.599 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2474520 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.858 17:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.395 17:21:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.395 00:10:44.395 real 0m10.780s 00:10:44.395 user 0m13.746s 00:10:44.395 sys 0m5.048s 00:10:44.395 17:21:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.395 17:21:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 ************************************ 00:10:44.395 END TEST nvmf_bdevio 00:10:44.395 ************************************ 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:44.395 00:10:44.395 real 4m38.526s 00:10:44.395 user 10m30.281s 00:10:44.395 sys 1m37.203s 
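The killprocess/nvmf_tcp_fini sequence traced above compresses to the pattern below; the function body is reconstructed from the traced commands rather than copied from autotest_common.sh, and the netns deletion is an assumption about what _remove_spdk_ns does:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if it already exited
        kill "$pid"
        wait "$pid"            # legal here: the test shell itself spawned the target
    }
    killprocess 2474520                                   # pid taken from the log above
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep every non-SPDK rule
    ip netns del cvl_0_0_ns_spdk                          # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # literal command from the trace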
00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 ************************************ 00:10:44.395 END TEST nvmf_target_core 00:10:44.395 ************************************ 00:10:44.395 17:21:13 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:44.395 17:21:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.395 17:21:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.395 17:21:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 ************************************ 00:10:44.395 START TEST nvmf_target_extra 00:10:44.395 ************************************ 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:44.395 * Looking for test storage... 00:10:44.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.395 --rc genhtml_branch_coverage=1 00:10:44.395 --rc genhtml_function_coverage=1 00:10:44.395 --rc genhtml_legend=1 00:10:44.395 --rc geninfo_all_blocks=1 00:10:44.395 --rc geninfo_unexecuted_blocks=1 00:10:44.395 00:10:44.395 ' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.395 --rc genhtml_branch_coverage=1 00:10:44.395 --rc genhtml_function_coverage=1 00:10:44.395 --rc genhtml_legend=1 00:10:44.395 --rc geninfo_all_blocks=1 00:10:44.395 --rc geninfo_unexecuted_blocks=1 00:10:44.395 00:10:44.395 ' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.395 --rc genhtml_branch_coverage=1 00:10:44.395 --rc genhtml_function_coverage=1 00:10:44.395 --rc genhtml_legend=1 00:10:44.395 --rc geninfo_all_blocks=1 00:10:44.395 --rc geninfo_unexecuted_blocks=1 00:10:44.395 00:10:44.395 ' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.395 --rc genhtml_branch_coverage=1 00:10:44.395 --rc genhtml_function_coverage=1 00:10:44.395 --rc genhtml_legend=1 00:10:44.395 --rc geninfo_all_blocks=1 00:10:44.395 --rc geninfo_unexecuted_blocks=1 00:10:44.395 00:10:44.395 ' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
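The lt/cmp_versions trace near the top of this test (the IFS=.-: reads and the (( ver1[v] < ver2[v] )) comparisons) is the stock component-wise version check from scripts/common.sh, used here to pick lcov options. Condensed into a sketch, with the caveat that the real helper first validates each component against ^[0-9]+$ while this one simply assumes numeric fields:

    lt() {  # does version $1 sort strictly before version $2?
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # missing fields count as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo older  # matches the traced result: lcov 1.15 predates 2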
00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.395 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 ************************************ 00:10:44.396 START TEST nvmf_example 00:10:44.396 ************************************ 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:44.396 * Looking for test storage... 
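The "[: : integer expression expected" complaint from common.sh line 33 above is cosmetic: a variable that expands to the empty string is compared numerically, test cannot parse '' as an integer, the check simply evaluates false, and the script carries on. The trace does not show which variable it is, so the sketch below uses a placeholder name; the usual fix is to default the empty value before comparing:

    FLAG=""                 # placeholder; the real variable name is not visible in the trace
    [ "$FLAG" -eq 1 ]       # reproduces: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]  # defaulting empty to 0 keeps the test quiet and well-defined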
00:10:44.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.396 --rc genhtml_branch_coverage=1 00:10:44.396 --rc genhtml_function_coverage=1 00:10:44.396 --rc genhtml_legend=1 00:10:44.396 --rc geninfo_all_blocks=1 00:10:44.396 --rc geninfo_unexecuted_blocks=1 00:10:44.396 00:10:44.396 ' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.396 --rc genhtml_branch_coverage=1 00:10:44.396 --rc genhtml_function_coverage=1 00:10:44.396 --rc genhtml_legend=1 00:10:44.396 --rc geninfo_all_blocks=1 00:10:44.396 --rc geninfo_unexecuted_blocks=1 00:10:44.396 00:10:44.396 ' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.396 --rc genhtml_branch_coverage=1 00:10:44.396 --rc genhtml_function_coverage=1 00:10:44.396 --rc genhtml_legend=1 00:10:44.396 --rc geninfo_all_blocks=1 00:10:44.396 --rc geninfo_unexecuted_blocks=1 00:10:44.396 00:10:44.396 ' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.396 --rc genhtml_branch_coverage=1 00:10:44.396 --rc genhtml_function_coverage=1 00:10:44.396 --rc genhtml_legend=1 00:10:44.396 --rc geninfo_all_blocks=1 00:10:44.396 --rc geninfo_unexecuted_blocks=1 00:10:44.396 00:10:44.396 ' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:44.396 17:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.396 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:44.397 17:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.397 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:50.967 17:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:50.967 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.967 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:50.968 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:50.968 Found net devices under 0000:af:00.0: cvl_0_0 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:50.968 Found net devices under 0000:af:00.1: cvl_0_1 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.968 17:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:10:50.968 00:10:50.968 --- 10.0.0.2 ping statistics --- 00:10:50.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.968 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:50.968 00:10:50.968 --- 10.0.0.1 ping statistics --- 00:10:50.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.968 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2478551 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2478551 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2478551 ']' 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.968 17:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.968 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.535 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:51.536 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:03.738 Initializing NVMe Controllers 00:11:03.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:03.738 Initialization complete. Launching workers. 00:11:03.738 ======================================================== 00:11:03.738 Latency(us) 00:11:03.738 Device Information : IOPS MiB/s Average min max 00:11:03.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18710.45 73.09 3419.89 533.81 16224.21 00:11:03.738 ======================================================== 00:11:03.738 Total : 18710.45 73.09 3419.89 533.81 16224.21 00:11:03.738 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.738 rmmod nvme_tcp 00:11:03.738 rmmod nvme_fabrics 00:11:03.738 rmmod nvme_keyring 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2478551 ']' 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2478551 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2478551 ']' 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2478551 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478551 00:11:03.738 17:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478551' 00:11:03.738 killing process with pid 2478551 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2478551 00:11:03.738 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2478551 00:11:03.738 nvmf threads initialize successfully 00:11:03.738 bdev subsystem init successfully 00:11:03.738 created a nvmf target service 00:11:03.738 create targets's poll groups done 00:11:03.738 all subsystems of target started 00:11:03.738 nvmf target is running 00:11:03.738 all subsystems of target stopped 00:11:03.738 destroy targets's poll groups done 00:11:03.738 destroyed the nvmf target service 00:11:03.738 bdev subsystem finish successfully 00:11:03.738 nvmf threads destroy successfully 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.738 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.997 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:03.997 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:03.997 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.997 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.256 00:11:04.256 real 0m19.862s 00:11:04.256 user 0m46.268s 00:11:04.256 sys 0m6.056s 00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.256 ************************************ 00:11:04.256 END TEST nvmf_example 00:11:04.256 ************************************ 00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:04.256 ************************************
00:11:04.256 START TEST nvmf_filesystem
00:11:04.256 ************************************
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:04.256 * Looking for test storage...
00:11:04.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:04.256 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
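The cmp_versions trace above is the harness deciding whether the installed lcov (1.15) is older than 2: both version strings are split on '.', '-' and ':' into arrays, missing fields are treated as zero, and the fields are compared numerically left to right. A rough standalone re-implementation of the same idea (the helper name is mine, not the SPDK one):

    version_lt() {   # usage: version_lt 1.15 2  ->  exit 0 if $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n
        (( n = ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"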
ver1_l : ver2_l) )) 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.519 --rc genhtml_branch_coverage=1 00:11:04.519 --rc genhtml_function_coverage=1 00:11:04.519 --rc genhtml_legend=1 00:11:04.519 --rc geninfo_all_blocks=1 00:11:04.519 --rc geninfo_unexecuted_blocks=1 00:11:04.519 00:11:04.519 ' 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.519 --rc genhtml_branch_coverage=1 00:11:04.519 --rc genhtml_function_coverage=1 00:11:04.519 --rc genhtml_legend=1 00:11:04.519 --rc geninfo_all_blocks=1 00:11:04.519 --rc geninfo_unexecuted_blocks=1 00:11:04.519 00:11:04.519 ' 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.519 --rc genhtml_branch_coverage=1 00:11:04.519 --rc genhtml_function_coverage=1 00:11:04.519 --rc genhtml_legend=1 00:11:04.519 --rc geninfo_all_blocks=1 00:11:04.519 --rc geninfo_unexecuted_blocks=1 00:11:04.519 00:11:04.519 ' 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.519 --rc genhtml_branch_coverage=1 00:11:04.519 --rc genhtml_function_coverage=1 00:11:04.519 --rc genhtml_legend=1 00:11:04.519 --rc geninfo_all_blocks=1 00:11:04.519 --rc geninfo_unexecuted_blocks=1 00:11:04.519 00:11:04.519 ' 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:04.519 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:04.519 
17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:04.519 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:04.520 #define SPDK_CONFIG_H
00:11:04.520 #define SPDK_CONFIG_AIO_FSDEV 1
00:11:04.520 #define SPDK_CONFIG_APPS 1
00:11:04.520 #define SPDK_CONFIG_ARCH native
00:11:04.520 #undef SPDK_CONFIG_ASAN
00:11:04.520 #undef SPDK_CONFIG_AVAHI
00:11:04.520 #undef SPDK_CONFIG_CET
00:11:04.520 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:04.520 #define SPDK_CONFIG_COVERAGE 1
00:11:04.520 #define SPDK_CONFIG_CROSS_PREFIX
00:11:04.520 #undef SPDK_CONFIG_CRYPTO
00:11:04.520 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:04.520 #undef SPDK_CONFIG_CUSTOMOCF
00:11:04.520 #undef SPDK_CONFIG_DAOS
00:11:04.520 #define SPDK_CONFIG_DAOS_DIR
00:11:04.520 #define SPDK_CONFIG_DEBUG 1
00:11:04.520 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:04.520 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:04.520 #define SPDK_CONFIG_DPDK_INC_DIR
00:11:04.520 #define SPDK_CONFIG_DPDK_LIB_DIR
00:11:04.520 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:04.520 #undef SPDK_CONFIG_DPDK_UADK
00:11:04.520 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:04.520 #define SPDK_CONFIG_EXAMPLES 1
00:11:04.520 #undef SPDK_CONFIG_FC
00:11:04.520 #define SPDK_CONFIG_FC_PATH
00:11:04.520 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:04.520 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:04.520 #define SPDK_CONFIG_FSDEV 1
00:11:04.520 #undef SPDK_CONFIG_FUSE
00:11:04.520 #undef SPDK_CONFIG_FUZZER
00:11:04.520 #define SPDK_CONFIG_FUZZER_LIB
00:11:04.520 #undef SPDK_CONFIG_GOLANG
00:11:04.520 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:04.520 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:04.520 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:04.520 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:04.520 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:04.520 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:04.520 #undef SPDK_CONFIG_HAVE_LZ4
00:11:04.520 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:04.520 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:04.520 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:04.520 #define SPDK_CONFIG_IDXD 1
00:11:04.520 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:04.520 #undef SPDK_CONFIG_IPSEC_MB
00:11:04.520 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:04.520 #define SPDK_CONFIG_ISAL 1
00:11:04.520 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:04.520 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:04.520 #define SPDK_CONFIG_LIBDIR
00:11:04.520 #undef SPDK_CONFIG_LTO
00:11:04.520 #define SPDK_CONFIG_MAX_LCORES 128
00:11:04.520 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:04.520 #define SPDK_CONFIG_NVME_CUSE 1
00:11:04.520 #undef SPDK_CONFIG_OCF
00:11:04.520 #define SPDK_CONFIG_OCF_PATH
00:11:04.520 #define SPDK_CONFIG_OPENSSL_PATH
00:11:04.520 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:04.520 #define SPDK_CONFIG_PGO_DIR
00:11:04.520 #undef SPDK_CONFIG_PGO_USE
00:11:04.520 #define SPDK_CONFIG_PREFIX /usr/local
00:11:04.520 #undef SPDK_CONFIG_RAID5F
00:11:04.520 #undef SPDK_CONFIG_RBD
00:11:04.520 #define SPDK_CONFIG_RDMA 1
00:11:04.520 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:04.520 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:04.520 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:04.520 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:04.520 #define SPDK_CONFIG_SHARED 1
00:11:04.520 #undef SPDK_CONFIG_SMA
00:11:04.520 #define SPDK_CONFIG_TESTS 1
00:11:04.520 #undef SPDK_CONFIG_TSAN
00:11:04.520 #define SPDK_CONFIG_UBLK 1 00:11:04.520 #define SPDK_CONFIG_UBSAN 1 00:11:04.520 #undef SPDK_CONFIG_UNIT_TESTS 00:11:04.520 #undef SPDK_CONFIG_URING 00:11:04.520 #define SPDK_CONFIG_URING_PATH 00:11:04.520 #undef SPDK_CONFIG_URING_ZNS 00:11:04.520 #undef SPDK_CONFIG_USDT 00:11:04.520 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:04.520 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:04.520 #define SPDK_CONFIG_VFIO_USER 1 00:11:04.520 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:04.520 #define SPDK_CONFIG_VHOST 1 00:11:04.520 #define SPDK_CONFIG_VIRTIO 1 00:11:04.520 #undef SPDK_CONFIG_VTUNE 00:11:04.520 #define SPDK_CONFIG_VTUNE_DIR 00:11:04.520 #define SPDK_CONFIG_WERROR 1 00:11:04.520 #define SPDK_CONFIG_WPDK_DIR 00:11:04.520 #undef SPDK_CONFIG_XNVME 00:11:04.520 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:04.520 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:04.520 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
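The long ': 0' / 'export SPDK_TEST_*' pairs here and below are consistent with the usual shell default-and-export idiom: each flag keeps the value injected by autorun-spdk.conf and falls back to 0 (or to values such as tcp and e810 seen further down) otherwise. A hedged sketch of that pattern, with the flag list abbreviated (this is my reading of the trace, not a copy of autotest_common.sh):

    for flag in SPDK_TEST_NVMF SPDK_TEST_NVME_CLI SPDK_TEST_VFIOUSER; do
        eval ": \"\${${flag}:=0}\""   # keep a value injected by the job, default to 0
        export "$flag"                # make it visible to every child test script
    done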
00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:04.521 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:04.521 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
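LD_LIBRARY_PATH and PYTHONPATH above carry the same directories four or five times because every nested source prepends them again. That is harmless, but a hedged sketch of an idempotent prepend that would keep the lists flat (the helper name is mine):

    prepend_path() {   # usage: prepend_path VAR /some/dir
        local var=$1 dir=$2
        case ":${!var}:" in
            *":$dir:"*) ;;   # already present: leave the list untouched
            *) eval "export $var=\"\$dir\${$var:+:}\${$var}\"" ;;
        esac
    }
    prepend_path LD_LIBRARY_PATH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib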
00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
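The suppression dance above (rm -rf, cat, echo leak:libfuse3.so, LSAN_OPTIONS=...) amounts to routing one known fuse3 leak past LeakSanitizer so it does not fail the run. Reduced to its effect, the setup is roughly:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                       # start from a clean file
    echo "leak:libfuse3.so" > "$asan_suppression_file"    # the one known leak to ignore
    export LSAN_OPTIONS=suppressions=$asan_suppression_file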
00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2480924 ]] 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2480924 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
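set_test_storage, whose trace follows, walks candidate directories (the test dir, then a /tmp/spdk.XXXXXX fallback) and parses df -T to find a filesystem with the requested 2147483648 bytes (~2 GiB) free. A condensed sketch of the same probe using GNU df directly (the function name is mine, and this skips the per-mount bookkeeping the real helper does):

    pick_test_dir() {   # usage: pick_test_dir <bytes> <dir>...
        local requested=$1 dir avail
        shift
        for dir in "$@"; do
            avail=$(df --output=avail -B1 -- "$dir" 2>/dev/null | tail -n1)
            [[ -n $avail ]] && (( avail >= requested )) && { printf '%s\n' "$dir"; return 0; }
        done
        return 1   # no candidate has enough free space
    }
    pick_test_dir 2147483648 "$PWD" /tmp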
00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.PP139y 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PP139y/tests/target /tmp/spdk.PP139y 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93585747968 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837199872 00:11:04.522 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7251451904 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50408566784 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418597888 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144431104 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50418335744 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=266240 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:04.522 * Looking for test 
storage... 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:04.522 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=93585747968 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9466044416 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:04.523 17:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.523 --rc genhtml_branch_coverage=1 00:11:04.523 --rc genhtml_function_coverage=1 00:11:04.523 --rc genhtml_legend=1 00:11:04.523 --rc geninfo_all_blocks=1 00:11:04.523 --rc geninfo_unexecuted_blocks=1 00:11:04.523 00:11:04.523 ' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.523 --rc genhtml_branch_coverage=1 00:11:04.523 --rc genhtml_function_coverage=1 00:11:04.523 --rc genhtml_legend=1 00:11:04.523 --rc geninfo_all_blocks=1 00:11:04.523 --rc geninfo_unexecuted_blocks=1 00:11:04.523 00:11:04.523 ' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.523 --rc genhtml_branch_coverage=1 00:11:04.523 --rc genhtml_function_coverage=1 00:11:04.523 --rc genhtml_legend=1 00:11:04.523 --rc geninfo_all_blocks=1 00:11:04.523 --rc geninfo_unexecuted_blocks=1 00:11:04.523 00:11:04.523 ' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.523 --rc genhtml_branch_coverage=1 00:11:04.523 --rc genhtml_function_coverage=1 00:11:04.523 --rc genhtml_legend=1 00:11:04.523 --rc geninfo_all_blocks=1 00:11:04.523 --rc geninfo_unexecuted_blocks=1 00:11:04.523 00:11:04.523 ' 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
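The version dance above is scripts/common.sh checking whether the installed lcov predates 2.0 (1.15 here), so that the pre-2.0 option spellings in LCOV_OPTS apply. A reduced sketch of the lt/cmp_versions pair it exercises (the real helper also validates each component via the decimal calls seen in the trace and supports more operators):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                   # split version strings on . - :
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a < b)) && { [[ $2 == *'<'* ]]; return; }
        ((a > b)) && { [[ $2 == *'>'* ]]; return; }
    done
    [[ $2 == *'='* ]]               # all compared components equal
}

lt 1.15 2 && echo "old lcov"        # matches the trace: 1 < 2 decides at the first component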
-- nvmf/common.sh@7 -- # uname -s 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.523 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.783 17:21:33 
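The "[: : integer expression expected" message above is not extraction damage: line 33 of test/nvmf/common.sh feeds an empty variable to a numeric test, which test(1) rejects while the script carries on. A sketch of the failure mode and a defensive spelling (VAR is a stand-in name; the actual variable at that line is not visible in the trace):

[ '' -eq 1 ]              # what the trace executed: status 2 plus the error line
[ "${VAR:-0}" -eq 1 ]     # hypothetical guard: empty/unset is treated as 0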
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.783 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:11.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:11.354 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.354 17:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.354 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:11.355 Found net devices under 0000:af:00.0: cvl_0_0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:11.355 Found net devices under 0000:af:00.1: cvl_0_1 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:11.355 17:21:39 
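Device discovery in the trace above works purely through sysfs: for each Intel E810 function (0x8086:0x159b) found on the PCI bus, the net/ directory under the device names its kernel interfaces. A standalone equivalent for the two functions this host reported:

for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done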
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:11.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:11:11.355 00:11:11.355 --- 10.0.0.2 ping statistics --- 00:11:11.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.355 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:11:11.355 00:11:11.355 --- 10.0.0.1 ping statistics --- 00:11:11.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.355 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.355 ************************************ 00:11:11.355 START TEST nvmf_filesystem_no_in_capsule 00:11:11.355 ************************************ 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2483994 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2483994 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2483994 ']' 00:11:11.355 
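nvmf_tcp_init, traced above, builds the whole test topology on one host: the target-side port moves into its own network namespace, so 10.0.0.1 (initiator, cvl_0_1) and 10.0.0.2 (target, cvl_0_0) exchange real TCP over the two E810 ports. The same setup as plain commands, copied from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # host -> namespace, 0.387 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host, 0.221 ms above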
17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.355 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.355 [2024-12-09 17:21:39.876008] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:11:11.355 [2024-12-09 17:21:39.876058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.355 [2024-12-09 17:21:39.956327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.355 [2024-12-09 17:21:39.997679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.355 [2024-12-09 17:21:39.997715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.355 [2024-12-09 17:21:39.997722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.355 [2024-12-09 17:21:39.997728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.355 [2024-12-09 17:21:39.997733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
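nvmfappstart, just above, launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers; the EAL parameters above and the reactor notices just below are that process coming up on cores 0-3 (-m 0xF). A minimal stand-in (paths from this workspace; the polling loop is a simplification of waitforlisten):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                 # 2483994 in this run

until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done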
00:11:11.355 [2024-12-09 17:21:39.999301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.355 [2024-12-09 17:21:39.999338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.355 [2024-12-09 17:21:39.999447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.355 [2024-12-09 17:21:39.999448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.355 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.355 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:11.355 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.355 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.355 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 [2024-12-09 17:21:40.139308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.356 17:21:40 
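With the target listening on the RPC socket, the test provisions it entirely through rpc_cmd, a thin wrapper around scripts/rpc.py -s /var/tmp/spdk.sock. The full sequence, whose last two calls (add_ns, add_listener) continue just below, as plain invocations:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: the in_capsule=0 variant of this test
$rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420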
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 [2024-12-09 17:21:40.309365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:11.356 { 00:11:11.356 "name": "Malloc1", 00:11:11.356 "aliases": [ 00:11:11.356 "2ecc4912-99b2-4166-90a4-18ef48400d13" 00:11:11.356 ], 00:11:11.356 "product_name": "Malloc disk", 00:11:11.356 "block_size": 512, 00:11:11.356 "num_blocks": 1048576, 00:11:11.356 "uuid": "2ecc4912-99b2-4166-90a4-18ef48400d13", 00:11:11.356 "assigned_rate_limits": { 00:11:11.356 "rw_ios_per_sec": 0, 00:11:11.356 "rw_mbytes_per_sec": 0, 00:11:11.356 "r_mbytes_per_sec": 0, 00:11:11.356 "w_mbytes_per_sec": 0 00:11:11.356 }, 00:11:11.356 "claimed": true, 00:11:11.356 "claim_type": "exclusive_write", 00:11:11.356 "zoned": false, 00:11:11.356 "supported_io_types": { 00:11:11.356 "read": 
true, 00:11:11.356 "write": true, 00:11:11.356 "unmap": true, 00:11:11.356 "flush": true, 00:11:11.356 "reset": true, 00:11:11.356 "nvme_admin": false, 00:11:11.356 "nvme_io": false, 00:11:11.356 "nvme_io_md": false, 00:11:11.356 "write_zeroes": true, 00:11:11.356 "zcopy": true, 00:11:11.356 "get_zone_info": false, 00:11:11.356 "zone_management": false, 00:11:11.356 "zone_append": false, 00:11:11.356 "compare": false, 00:11:11.356 "compare_and_write": false, 00:11:11.356 "abort": true, 00:11:11.356 "seek_hole": false, 00:11:11.356 "seek_data": false, 00:11:11.356 "copy": true, 00:11:11.356 "nvme_iov_md": false 00:11:11.356 }, 00:11:11.356 "memory_domains": [ 00:11:11.356 { 00:11:11.356 "dma_device_id": "system", 00:11:11.356 "dma_device_type": 1 00:11:11.356 }, 00:11:11.356 { 00:11:11.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.356 "dma_device_type": 2 00:11:11.356 } 00:11:11.356 ], 00:11:11.356 "driver_specific": {} 00:11:11.356 } 00:11:11.356 ]' 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:11.356 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.729 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.729 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:12.729 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.729 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:12.729 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:14.625 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:14.625 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:14.625 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:14.625 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:14.625 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:14.626 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:14.883 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:14.883 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.255 ************************************ 00:11:16.255 START TEST filesystem_ext4 00:11:16.255 ************************************ 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
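On the initiator side the trace above connects, waits for the namespace to surface as a block device, and partitions it. Condensed to standalone commands (hostnqn/hostid as generated by nvme gen-hostnqn earlier in the log):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    --hostid=801347e8-3fd0-e911-906e-0017a4403562

# waitforserial: poll until exactly one device advertises the subsystem serial.
until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )); do
    sleep 2
done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')  # nvme0n1 here

# One GPT partition across the whole 512 MiB namespace, then re-read the table.
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1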
00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:16.255 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:16.255 mke2fs 1.47.0 (5-Feb-2023) 00:11:16.255 Discarding device blocks: 0/522240 done 00:11:16.255 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:16.255 Filesystem UUID: 8955a1cc-d67f-4e66-beb1-c189e83a7199 00:11:16.255 Superblock backups stored on blocks: 00:11:16.255 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:16.255 00:11:16.255 Allocating group tables: 0/64 done 00:11:16.255 Writing inode tables: 0/64 done 00:11:16.513 Creating journal (8192 blocks): done 00:11:17.964 Writing superblocks and filesystem accounting information: 0/64 done 00:11:17.964 00:11:17.964 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:17.964 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.524 
17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2483994 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.524 00:11:24.524 real 0m7.522s 00:11:24.524 user 0m0.024s 00:11:24.524 sys 0m0.072s 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:24.524 ************************************ 00:11:24.524 END TEST filesystem_ext4 00:11:24.524 ************************************ 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.524 ************************************ 00:11:24.524 START TEST filesystem_btrfs 00:11:24.524 ************************************ 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:24.524 17:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:24.524 btrfs-progs v6.8.1 00:11:24.524 See https://btrfs.readthedocs.io for more information. 00:11:24.524 00:11:24.524 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:24.524 NOTE: several default settings have changed in version 5.15, please make sure 00:11:24.524 this does not affect your deployments: 00:11:24.524 - DUP for metadata (-m dup) 00:11:24.524 - enabled no-holes (-O no-holes) 00:11:24.524 - enabled free-space-tree (-R free-space-tree) 00:11:24.524 00:11:24.524 Label: (null) 00:11:24.524 UUID: 493f9c89-65c6-4a7f-9fc0-68528c4ccd4a 00:11:24.524 Node size: 16384 00:11:24.524 Sector size: 4096 (CPU page size: 4096) 00:11:24.524 Filesystem size: 510.00MiB 00:11:24.524 Block group profiles: 00:11:24.524 Data: single 8.00MiB 00:11:24.524 Metadata: DUP 32.00MiB 00:11:24.524 System: DUP 8.00MiB 00:11:24.524 SSD detected: yes 00:11:24.524 Zoned device: no 00:11:24.524 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:24.524 Checksum: crc32c 00:11:24.524 Number of devices: 1 00:11:24.524 Devices: 00:11:24.524 ID SIZE PATH 00:11:24.524 1 510.00MiB /dev/nvme0n1p1 00:11:24.524 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:24.524 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2483994 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.782 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.782 
17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.040 00:11:25.040 real 0m1.280s 00:11:25.040 user 0m0.025s 00:11:25.040 sys 0m0.117s 00:11:25.040 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.040 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.040 ************************************ 00:11:25.040 END TEST filesystem_btrfs 00:11:25.040 ************************************ 00:11:25.040 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:25.040 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.040 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.040 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.040 ************************************ 00:11:25.040 START TEST filesystem_xfs 00:11:25.040 ************************************ 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.040 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:25.041 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:25.041 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:25.041 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:25.041 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:25.041 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:25.041 = sectsz=512 attr=2, projid32bit=1 00:11:25.041 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:25.041 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:25.041 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:25.041 = sunit=0 swidth=0 blks 00:11:25.041 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:25.041 log =internal log bsize=4096 blocks=16384, version=2 00:11:25.041 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:25.041 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:25.972 Discarding blocks...Done. 00:11:25.972 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:25.972 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2483994 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.496 00:11:28.496 real 0m3.300s 00:11:28.496 user 0m0.017s 00:11:28.496 sys 0m0.082s 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:28.496 ************************************ 00:11:28.496 END TEST filesystem_xfs 00:11:28.496 ************************************ 00:11:28.496 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.754 17:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2483994 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2483994 ']' 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2483994 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483994 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483994' 00:11:28.754 killing process with pid 2483994 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2483994 00:11:28.754 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2483994 00:11:29.013 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:29.013 00:11:29.013 real 0m18.361s 00:11:29.013 user 1m12.293s 00:11:29.013 sys 0m1.399s 00:11:29.013 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.013 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.013 ************************************ 00:11:29.013 END TEST nvmf_filesystem_no_in_capsule 00:11:29.013 ************************************ 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.272 ************************************ 00:11:29.272 START TEST nvmf_filesystem_in_capsule 00:11:29.272 ************************************ 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2487299 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2487299 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2487299 ']' 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
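This second pass repeats the same ext4/btrfs/xfs cycle with in-capsule data enabled: nvmf_filesystem_part 4096 evidently threads the value through to the transport, so host-to-controller data up to 4096 bytes can travel inside the TCP command capsule rather than in a separate transfer solicited by R2T. At the RPC level the only setup difference from the first pass is the -c value handed to nvmf_create_transport (flags copied from the trace below; the first pass ran with in_capsule=0):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096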
00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.272 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.272 [2024-12-09 17:21:58.311956] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:11:29.272 [2024-12-09 17:21:58.311999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.272 [2024-12-09 17:21:58.390011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.272 [2024-12-09 17:21:58.430848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.272 [2024-12-09 17:21:58.430885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.272 [2024-12-09 17:21:58.430892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.272 [2024-12-09 17:21:58.430897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.272 [2024-12-09 17:21:58.430903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.272 [2024-12-09 17:21:58.432389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.272 [2024-12-09 17:21:58.432501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.272 [2024-12-09 17:21:58.432605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.272 [2024-12-09 17:21:58.432606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 [2024-12-09 17:21:58.570313] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.530 17:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 Malloc1 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.787 [2024-12-09 17:21:58.728395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:29.787 17:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.787 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:29.787 { 00:11:29.787 "name": "Malloc1", 00:11:29.788 "aliases": [ 00:11:29.788 "1fcd6c42-e87c-4947-9f7a-eb6a9fd8a8e4" 00:11:29.788 ], 00:11:29.788 "product_name": "Malloc disk", 00:11:29.788 "block_size": 512, 00:11:29.788 "num_blocks": 1048576, 00:11:29.788 "uuid": "1fcd6c42-e87c-4947-9f7a-eb6a9fd8a8e4", 00:11:29.788 "assigned_rate_limits": { 00:11:29.788 "rw_ios_per_sec": 0, 00:11:29.788 "rw_mbytes_per_sec": 0, 00:11:29.788 "r_mbytes_per_sec": 0, 00:11:29.788 "w_mbytes_per_sec": 0 00:11:29.788 }, 00:11:29.788 "claimed": true, 00:11:29.788 "claim_type": "exclusive_write", 00:11:29.788 "zoned": false, 00:11:29.788 "supported_io_types": { 00:11:29.788 "read": true, 00:11:29.788 "write": true, 00:11:29.788 "unmap": true, 00:11:29.788 "flush": true, 00:11:29.788 "reset": true, 00:11:29.788 "nvme_admin": false, 00:11:29.788 "nvme_io": false, 00:11:29.788 "nvme_io_md": false, 00:11:29.788 "write_zeroes": true, 00:11:29.788 "zcopy": true, 00:11:29.788 "get_zone_info": false, 00:11:29.788 "zone_management": false, 00:11:29.788 "zone_append": false, 00:11:29.788 "compare": false, 00:11:29.788 "compare_and_write": false, 00:11:29.788 "abort": true, 00:11:29.788 "seek_hole": false, 00:11:29.788 "seek_data": false, 00:11:29.788 "copy": true, 00:11:29.788 "nvme_iov_md": false 00:11:29.788 }, 00:11:29.788 "memory_domains": [ 00:11:29.788 { 00:11:29.788 "dma_device_id": "system", 00:11:29.788 "dma_device_type": 1 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.788 "dma_device_type": 2 00:11:29.788 } 00:11:29.788 ], 00:11:29.788 "driver_specific": {} 00:11:29.788 } 00:11:29.788 ]' 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:29.788 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.160 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.160 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:31.160 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.160 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:31.161 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:33.058 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:33.058 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:33.058 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:33.059 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:33.316 17:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:33.968 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.947 ************************************ 00:11:34.947 START TEST filesystem_in_capsule_ext4 00:11:34.947 ************************************ 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:34.947 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:34.947 mke2fs 1.47.0 (5-Feb-2023) 00:11:35.204 Discarding device blocks: 0/522240 done 00:11:35.204 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:35.204 Filesystem UUID: 6104da64-f6ba-4bdf-9e56-614cd2b90c3b 00:11:35.204 Superblock backups stored on blocks: 00:11:35.204 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:35.204 00:11:35.204 Allocating group tables: 0/64 done 00:11:35.204 Writing inode tables: 
0/64 done 00:11:35.204 Creating journal (8192 blocks): done 00:11:36.590 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:11:36.590 00:11:36.590 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:36.590 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2487299 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.143 00:11:43.143 real 0m7.349s 00:11:43.143 user 0m0.031s 00:11:43.143 sys 0m0.067s 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:43.143 ************************************ 00:11:43.143 END TEST filesystem_in_capsule_ext4 00:11:43.143 ************************************ 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.143 
************************************ 00:11:43.143 START TEST filesystem_in_capsule_btrfs 00:11:43.143 ************************************ 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:43.143 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.144 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:43.144 btrfs-progs v6.8.1 00:11:43.144 See https://btrfs.readthedocs.io for more information. 00:11:43.144 00:11:43.144 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:43.144 NOTE: several default settings have changed in version 5.15, please make sure 00:11:43.144 this does not affect your deployments: 00:11:43.144 - DUP for metadata (-m dup) 00:11:43.144 - enabled no-holes (-O no-holes) 00:11:43.144 - enabled free-space-tree (-R free-space-tree) 00:11:43.144 00:11:43.144 Label: (null) 00:11:43.144 UUID: f63912d0-bc90-463c-966f-e62a1f210daf 00:11:43.144 Node size: 16384 00:11:43.144 Sector size: 4096 (CPU page size: 4096) 00:11:43.144 Filesystem size: 510.00MiB 00:11:43.144 Block group profiles: 00:11:43.144 Data: single 8.00MiB 00:11:43.144 Metadata: DUP 32.00MiB 00:11:43.144 System: DUP 8.00MiB 00:11:43.144 SSD detected: yes 00:11:43.144 Zoned device: no 00:11:43.144 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:43.144 Checksum: crc32c 00:11:43.144 Number of devices: 1 00:11:43.144 Devices: 00:11:43.144 ID SIZE PATH 00:11:43.144 1 510.00MiB /dev/nvme0n1p1 00:11:43.144 00:11:43.144 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:43.144 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2487299 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.402 00:11:43.402 real 0m0.868s 00:11:43.402 user 0m0.022s 00:11:43.402 sys 0m0.117s 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:43.402 ************************************ 00:11:43.402 END TEST filesystem_in_capsule_btrfs 00:11:43.402 ************************************ 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.402 ************************************ 00:11:43.402 START TEST filesystem_in_capsule_xfs 00:11:43.402 ************************************ 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.402 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:43.402 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:43.402 = sectsz=512 attr=2, projid32bit=1 00:11:43.403 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:43.403 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:43.403 data = bsize=4096 blocks=130560, imaxpct=25 00:11:43.403 = sunit=0 swidth=0 blks 00:11:43.403 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:43.403 log =internal log bsize=4096 blocks=16384, version=2 00:11:43.403 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:43.403 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:44.774 Discarding blocks...Done. 
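Each filesystem variant then gets the same smoke test: mount the fresh partition, create and delete a file with a sync on either side, unmount, check with kill -0 that the target process survived the I/O, and grep lsblk to confirm the namespace and partition are both still visible. Condensed from the trace (a sketch, with $nvmfpid standing in for the target pid, 2487299 in this pass):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target still alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still intact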
00:11:44.774 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:44.774 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.305 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.305 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.305 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.305 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.305 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.306 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.306 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2487299 00:11:47.306 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.306 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.306 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.306 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.306 00:11:47.306 real 0m3.534s 00:11:47.306 user 0m0.027s 00:11:47.306 sys 0m0.073s 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.306 ************************************ 00:11:47.306 END TEST filesystem_in_capsule_xfs 00:11:47.306 ************************************ 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2487299 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2487299 ']' 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2487299 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487299 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2487299' 00:11:47.306 killing process with pid 2487299 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2487299 00:11:47.306 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2487299 00:11:47.565 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:47.565 00:11:47.565 real 0m18.476s 00:11:47.565 user 1m12.724s 00:11:47.565 sys 0m1.450s 00:11:47.565 17:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.565 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.565 ************************************ 00:11:47.565 END TEST nvmf_filesystem_in_capsule 00:11:47.565 ************************************ 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.824 rmmod nvme_tcp 00:11:47.824 rmmod nvme_fabrics 00:11:47.824 rmmod nvme_keyring 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:47.824 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.825 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.361 00:11:50.361 real 0m45.635s 00:11:50.361 user 2m27.082s 00:11:50.361 sys 0m7.572s 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.361 
************************************ 00:11:50.361 END TEST nvmf_filesystem 00:11:50.361 ************************************ 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.361 ************************************ 00:11:50.361 START TEST nvmf_target_discovery 00:11:50.361 ************************************ 00:11:50.361 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:50.361 * Looking for test storage... 00:11:50.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.361 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:50.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.362 --rc genhtml_branch_coverage=1 00:11:50.362 --rc genhtml_function_coverage=1 00:11:50.362 --rc genhtml_legend=1 00:11:50.362 --rc geninfo_all_blocks=1 00:11:50.362 --rc geninfo_unexecuted_blocks=1 00:11:50.362 00:11:50.362 ' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:50.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.362 --rc genhtml_branch_coverage=1 00:11:50.362 --rc genhtml_function_coverage=1 00:11:50.362 --rc genhtml_legend=1 00:11:50.362 --rc geninfo_all_blocks=1 00:11:50.362 --rc geninfo_unexecuted_blocks=1 00:11:50.362 00:11:50.362 ' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:50.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.362 --rc genhtml_branch_coverage=1 00:11:50.362 --rc genhtml_function_coverage=1 00:11:50.362 --rc genhtml_legend=1 00:11:50.362 --rc geninfo_all_blocks=1 00:11:50.362 --rc geninfo_unexecuted_blocks=1 00:11:50.362 00:11:50.362 ' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:50.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.362 --rc genhtml_branch_coverage=1 00:11:50.362 --rc genhtml_function_coverage=1 00:11:50.362 --rc genhtml_legend=1 00:11:50.362 --rc geninfo_all_blocks=1 00:11:50.362 --rc geninfo_unexecuted_blocks=1 00:11:50.362 00:11:50.362 ' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.362 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.363 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.937 17:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:56.937 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:56.937 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:56.937 Found net devices under 0000:af:00.0: cvl_0_0 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.937 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:56.938 Found net devices under 0000:af:00.1: cvl_0_1 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.938 17:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:56.938 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:56.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:56.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms
00:11:56.938
00:11:56.938 --- 10.0.0.2 ping statistics ---
00:11:56.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:56.938 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:56.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:56.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:11:56.938
00:11:56.938 --- 10.0.0.1 ping statistics ---
00:11:56.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:56.938 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2494011
00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2494011 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2494011 ']' 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 [2024-12-09 17:22:25.266703] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:11:56.938 [2024-12-09 17:22:25.266748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.938 [2024-12-09 17:22:25.342159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.938 [2024-12-09 17:22:25.382605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.938 [2024-12-09 17:22:25.382643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.938 [2024-12-09 17:22:25.382649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.938 [2024-12-09 17:22:25.382656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.938 [2024-12-09 17:22:25.382660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.938 [2024-12-09 17:22:25.384050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.938 [2024-12-09 17:22:25.384162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.938 [2024-12-09 17:22:25.384270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.938 [2024-12-09 17:22:25.384270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 [2024-12-09 17:22:25.520692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 Null1 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 17:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 [2024-12-09 17:22:25.573370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.938 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 Null2 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:56.939 Null3 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 Null4 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.939 17:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:11:56.939
00:11:56.939 Discovery Log Number of Records 6, Generation counter 6
00:11:56.939 =====Discovery Log Entry 0======
00:11:56.939 trtype: tcp
00:11:56.939 adrfam: ipv4
00:11:56.939 subtype: current discovery subsystem
00:11:56.939 treq: not required
00:11:56.939 portid: 0
00:11:56.939 trsvcid: 4420
00:11:56.939 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:56.939 traddr: 10.0.0.2
00:11:56.939 eflags: explicit discovery connections, duplicate discovery information
00:11:56.939 sectype: none
00:11:56.939 =====Discovery Log Entry 1======
00:11:56.939 trtype: tcp
00:11:56.939 adrfam: ipv4
00:11:56.939 subtype: nvme subsystem
00:11:56.939 treq: not required
00:11:56.939 portid: 0
00:11:56.939 trsvcid: 4420
00:11:56.939 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:56.939 traddr: 10.0.0.2
00:11:56.939 eflags: none
00:11:56.939 sectype: none
00:11:56.939 =====Discovery Log Entry 2======
00:11:56.939 trtype: tcp
00:11:56.939 adrfam: ipv4
00:11:56.939 subtype: nvme subsystem
00:11:56.939 treq: not required
00:11:56.939 portid: 0
00:11:56.939 trsvcid: 4420
00:11:56.939 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:56.939 traddr: 10.0.0.2
00:11:56.939 eflags: none
00:11:56.939 sectype: none
00:11:56.939 =====Discovery Log Entry 3======
00:11:56.939 trtype: tcp
00:11:56.939 adrfam: ipv4
00:11:56.939 subtype: nvme subsystem
00:11:56.939 treq: not required
00:11:56.939 portid: 0
00:11:56.939 trsvcid: 4420
00:11:56.939 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:56.939 traddr: 10.0.0.2
00:11:56.939 eflags: none
00:11:56.939 sectype: none
00:11:56.939 =====Discovery Log Entry 4======
00:11:56.939 trtype: tcp
00:11:56.939 adrfam: ipv4
00:11:56.939 subtype: nvme subsystem
00:11:56.939 treq: not required
00:11:56.939 portid: 0
00:11:56.939 trsvcid: 4420
00:11:56.939 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:56.939 traddr: 10.0.0.2
00:11:56.939 eflags: none
00:11:56.939 sectype: none
00:11:56.939 =====Discovery Log Entry 5======
00:11:56.939 trtype: tcp
00:11:56.939 adrfam: ipv4
00:11:56.939 subtype: discovery subsystem referral
00:11:56.939 treq: not required
00:11:56.939 portid: 0
00:11:56.939 trsvcid: 4430
00:11:56.939 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:56.939 traddr: 10.0.0.2
00:11:56.939 eflags: none
00:11:56.939 sectype: none
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:56.939 Perform nvmf subsystem discovery via RPC
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.939 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.939 [
00:11:56.939 {
00:11:56.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:56.939 "subtype": "Discovery",
00:11:56.939 "listen_addresses": [
00:11:56.939 {
00:11:56.939 "trtype": "TCP",
00:11:56.939 "adrfam": "IPv4",
00:11:56.939 "traddr": "10.0.0.2",
00:11:56.939 "trsvcid": "4420"
00:11:56.939 }
00:11:56.939 ],
00:11:56.939 "allow_any_host": true,
00:11:56.939 "hosts": []
00:11:56.939 },
00:11:56.939 {
00:11:56.939 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:56.939 "subtype": "NVMe",
00:11:56.939 "listen_addresses": [
00:11:56.939 {
00:11:56.939 "trtype": "TCP",
00:11:56.939 "adrfam": "IPv4",
00:11:56.939 "traddr": "10.0.0.2",
00:11:56.939 "trsvcid": "4420"
00:11:56.939 }
00:11:56.939 ],
00:11:56.939 "allow_any_host": true,
00:11:56.939 "hosts": [],
00:11:56.939 "serial_number": "SPDK00000000000001",
00:11:56.939 "model_number": "SPDK bdev Controller",
00:11:56.939 "max_namespaces": 32,
00:11:56.939 "min_cntlid": 1,
00:11:56.939 "max_cntlid": 65519,
00:11:56.939 "namespaces": [
00:11:56.939 {
00:11:56.939 "nsid": 1,
00:11:56.939 "bdev_name": "Null1",
00:11:56.939 "name": "Null1",
00:11:56.940 "nguid": "BCB72DC1F1164C56989B45878BB3B8F6",
00:11:56.940 "uuid": "bcb72dc1-f116-4c56-989b-45878bb3b8f6"
00:11:56.940 }
00:11:56.940 ]
00:11:56.940 },
00:11:56.940 {
00:11:56.940 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:56.940 "subtype": "NVMe",
00:11:56.940 "listen_addresses": [
00:11:56.940 {
00:11:56.940 "trtype": "TCP",
00:11:56.940 "adrfam": "IPv4",
00:11:56.940 "traddr": "10.0.0.2",
00:11:56.940 "trsvcid": "4420"
00:11:56.940 }
00:11:56.940 ],
00:11:56.940 "allow_any_host": true,
00:11:56.940 "hosts": [],
00:11:56.940 "serial_number": "SPDK00000000000002",
00:11:56.940 "model_number": "SPDK bdev Controller",
00:11:56.940 "max_namespaces": 32,
00:11:56.940 "min_cntlid": 1,
00:11:56.940 "max_cntlid": 65519,
00:11:56.940 "namespaces": [
00:11:56.940 {
00:11:56.940 "nsid": 1,
00:11:56.940 "bdev_name": "Null2",
00:11:56.940 "name": "Null2",
00:11:56.940 "nguid": "75BF3EA655E0498D8B17FDC90557D583",
00:11:56.940 "uuid": "75bf3ea6-55e0-498d-8b17-fdc90557d583"
00:11:56.940 }
00:11:56.940 ]
00:11:56.940 },
00:11:56.940 {
00:11:56.940 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:56.940 "subtype": "NVMe",
00:11:56.940 "listen_addresses": [
00:11:56.940 {
00:11:56.940 "trtype": "TCP",
00:11:56.940 "adrfam": "IPv4",
00:11:56.940 "traddr": "10.0.0.2",
00:11:56.940 "trsvcid": "4420"
00:11:56.940 }
00:11:56.940 ],
00:11:56.940 "allow_any_host": true,
00:11:56.940 "hosts": [],
00:11:56.940 "serial_number": "SPDK00000000000003",
00:11:56.940 "model_number": "SPDK bdev Controller",
00:11:56.940 "max_namespaces": 32,
00:11:56.940 "min_cntlid": 1,
00:11:56.940 "max_cntlid": 65519,
00:11:56.940 "namespaces": [
00:11:56.940 {
00:11:56.940 "nsid": 1,
00:11:56.940 "bdev_name": "Null3",
00:11:56.940 "name": "Null3",
00:11:56.940 "nguid": "5B10999B91AE49F98F2904A1730234F2",
00:11:56.940 "uuid": "5b10999b-91ae-49f9-8f29-04a1730234f2"
00:11:56.940 }
00:11:56.940 ]
00:11:56.940 },
00:11:56.940 {
00:11:56.940 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:56.940 "subtype": "NVMe",
00:11:56.940 "listen_addresses": [
00:11:56.940 {
00:11:56.940 "trtype": "TCP",
00:11:56.940 "adrfam": "IPv4",
00:11:56.940 "traddr": "10.0.0.2",
00:11:56.940 "trsvcid": "4420"
00:11:56.940 }
00:11:56.940 ],
00:11:56.940 "allow_any_host": true,
00:11:56.940 "hosts": [],
00:11:56.940 "serial_number": "SPDK00000000000004",
00:11:56.940 "model_number": "SPDK bdev Controller",
00:11:56.940 "max_namespaces": 32,
00:11:56.940 "min_cntlid": 1,
00:11:56.940 "max_cntlid": 65519,
00:11:56.940 "namespaces": [
00:11:56.940 {
00:11:56.940 "nsid": 1,
00:11:56.940 "bdev_name": "Null4",
00:11:56.940 "name": "Null4",
00:11:56.940 "nguid": "A496B52B895B4A0A998E636E3282D391",
00:11:56.940 "uuid": "a496b52b-895b-4a0a-998e-636e3282d391"
00:11:56.940 }
00:11:56.940 ]
00:11:56.940 }
00:11:56.940 ]
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:56.940 17:22:25
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:56.940 17:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.940 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.940 rmmod nvme_tcp 00:11:56.940 rmmod nvme_fabrics 00:11:56.940 rmmod nvme_keyring 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2494011 ']' 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2494011 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2494011 ']' 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2494011 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494011 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494011' 00:11:56.940 killing process with pid 2494011 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2494011 00:11:56.940 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2494011 00:11:57.200 17:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.200 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.736 00:11:59.736 real 0m9.326s 00:11:59.736 user 0m5.340s 00:11:59.736 sys 0m4.763s 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.736 ************************************ 00:11:59.736 END TEST nvmf_target_discovery 00:11:59.736 ************************************ 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.736 ************************************ 00:11:59.736 START TEST nvmf_referrals 00:11:59.736 ************************************ 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:59.736 * Looking for test storage... 
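Before following the referrals run, it helps to condense what the nvmf_target_discovery test above actually exercised. The sketch below is an illustrative reduction, not the verbatim script: it assumes a running nvmf_tgt reachable via rpc.py, reuses the 10.0.0.2/4420 listener from the log, omits the --hostnqn/--hostid plumbing, and the null-bdev size/block values are illustrative.

# four NVMe subsystems, each backed by a null bdev, plus one referral
rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    rpc.py bdev_null_create "Null$i" 102400 512          # size/block values illustrative
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# expect 6 discovery records: 1 current discovery subsystem + 4 subsystems + 1 referral
nvme discover -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_get_subsystems

# teardown mirrors setup
for i in 1 2 3 4; do
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc.py bdev_null_delete "Null$i"
done
rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430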
00:11:59.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.736 --rc genhtml_branch_coverage=1 00:11:59.736 --rc genhtml_function_coverage=1 00:11:59.736 --rc genhtml_legend=1 00:11:59.736 --rc geninfo_all_blocks=1 00:11:59.736 --rc geninfo_unexecuted_blocks=1 00:11:59.736 00:11:59.736 ' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.736 --rc genhtml_branch_coverage=1 00:11:59.736 --rc genhtml_function_coverage=1 00:11:59.736 --rc genhtml_legend=1 00:11:59.736 --rc geninfo_all_blocks=1 00:11:59.736 --rc geninfo_unexecuted_blocks=1 00:11:59.736 00:11:59.736 ' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.736 --rc genhtml_branch_coverage=1 00:11:59.736 --rc genhtml_function_coverage=1 00:11:59.736 --rc genhtml_legend=1 00:11:59.736 --rc geninfo_all_blocks=1 00:11:59.736 --rc geninfo_unexecuted_blocks=1 00:11:59.736 00:11:59.736 ' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.736 --rc genhtml_branch_coverage=1 00:11:59.736 --rc genhtml_function_coverage=1 00:11:59.736 --rc genhtml_legend=1 00:11:59.736 --rc geninfo_all_blocks=1 00:11:59.736 --rc geninfo_unexecuted_blocks=1 00:11:59.736 00:11:59.736 ' 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.736 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
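The "common.sh: line 33: [: : integer expression expected" message recorded just above is a script warning, not a test failure: a variable that is empty in this run is fed straight into a numeric test, and `[` rejects the empty string as an integer. A minimal reproduction with the usual guards (the variable name is illustrative, not the one common.sh uses):

flag=''
[ "$flag" -eq 1 ] && echo yes                      # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo yes                 # guard 1: default empty to 0
[ -n "$flag" ] && [ "$flag" -eq 1 ] && echo yes    # guard 2: require non-empty first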
00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.737 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:06.308 17:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.308 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:06.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:06.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:06.309 
17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:06.309 Found net devices under 0000:af:00.0: cvl_0_0 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:06.309 Found net devices under 0000:af:00.1: cvl_0_1 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:06.309 17:22:34 
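The detection pass above walks the supported PCI IDs (both E810 ports, 0x8086:0x159b, bound to the ice driver) and resolves each function to its kernel netdev through sysfs; that is where the cvl_0_0/cvl_0_1 names come from. A standalone sketch of the same lookup, assuming the bus addresses seen in the log:

for pci in 0000:af:00.0 0000:af:00.1; do
    # each netdev bound to the function appears as a directory under .../net/
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                  # skip if no netdev is bound
        name=${dev##*/}                            # e.g. cvl_0_0
        echo "PCI $pci -> netdev $name ($(cat "$dev/operstate"))"
    done
done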
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
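Both ports live in the same host, so the harness moves the target port into a private network namespace; the 10.0.0.1 to 10.0.0.2 pings then genuinely traverse the link rather than the kernel loopback path. The topology being assembled here, condensed (interface, namespace, and address names taken from the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # sanity: root ns -> namespace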
00:12:06.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:12:06.309 00:12:06.309 --- 10.0.0.2 ping statistics --- 00:12:06.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.309 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:06.309 00:12:06.309 --- 10.0.0.1 ping statistics --- 00:12:06.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.309 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2497547 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2497547 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2497547 ']' 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
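nvmfappstart then launches nvmf_tgt inside that namespace and blocks on the RPC socket, which is what the "Waiting for process to start up..." line reflects. A minimal equivalent of that start-and-wait, sketched with rpc_get_methods as a cheap readiness probe (the real helper, waitforlisten, is more elaborate):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died before listening" >&2; exit 1; }
    sleep 0.2
done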
00:12:06.309 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 [2024-12-09 17:22:34.674972] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:12:06.310 [2024-12-09 17:22:34.675024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.310 [2024-12-09 17:22:34.752153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.310 [2024-12-09 17:22:34.794034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.310 [2024-12-09 17:22:34.794069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.310 [2024-12-09 17:22:34.794077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.310 [2024-12-09 17:22:34.794083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.310 [2024-12-09 17:22:34.794088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.310 [2024-12-09 17:22:34.795663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.310 [2024-12-09 17:22:34.795773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.310 [2024-12-09 17:22:34.795878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.310 [2024-12-09 17:22:34.795879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 [2024-12-09 17:22:34.933754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:06.310 [2024-12-09 17:22:34.964371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:06.310 17:22:35 
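Each verification above reduces to a one-line jq pipeline: the RPC side reads the target's own referral table, while the nvme side replays a live discovery and drops the "current discovery subsystem" record. Side by side, in the shape get_referral_ips uses (host NQN/ID flags omitted for brevity):

# target-side view
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs

# wire-side view, as an initiator sees it
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort | xargs

# right after the three adds, both print: 127.0.0.2 127.0.0.3 127.0.0.4
# right after the three removals, both print nothing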
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.310 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.568 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.569 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.826 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:06.826 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:06.826 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:06.826 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:06.826 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:06.826 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.827 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.085 17:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:07.085 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.344 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:07.602 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:07.602 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:07.602 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:07.602 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:07.602 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.602 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.861 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
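
The referral trace above exercises three SPDK RPCs: referrals are added with nvmf_discovery_add_referral (optionally pinned to a subsystem NQN via -n), listed with nvmf_discovery_get_referrals, removed with nvmf_discovery_remove_referral, and cross-checked from the host side with nvme discover plus a jq filter that drops the current discovery subsystem's own entry. A minimal standalone sketch of the same flow, assuming a running nvmf_tgt with a discovery listener on 10.0.0.2:8009 and SPDK's scripts/rpc.py on PATH (in the trace, rpc_cmd is the test harness wrapper around rpc.py; the ./scripts/rpc.py path is illustrative):

    # Add a referral to another discovery service, and one scoped to a subsystem NQN.
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # List the referral addresses the target will hand out.
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

    # Verify from the host: every log-page record except the discovery service itself.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # Tear the referrals back down.
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery

As in the trace's get_referral_ips helper, piping both the RPC view and the host view through sort makes the two address lists order-stable before they are string-compared.
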
00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.120 rmmod nvme_tcp 00:12:08.120 rmmod nvme_fabrics 00:12:08.120 rmmod nvme_keyring 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2497547 ']' 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2497547 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2497547 ']' 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2497547 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497547 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497547' 00:12:08.120 killing process with pid 2497547 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2497547 00:12:08.120 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2497547 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.380 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.380 17:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.283 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.283 00:12:10.283 real 0m11.042s 00:12:10.283 user 0m12.785s 00:12:10.283 sys 0m5.275s 00:12:10.283 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.283 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.283 ************************************ 00:12:10.283 END TEST nvmf_referrals 00:12:10.283 ************************************ 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.543 ************************************ 00:12:10.543 START TEST nvmf_connect_disconnect 00:12:10.543 ************************************ 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.543 * Looking for test storage... 00:12:10.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.543 17:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.543 --rc genhtml_branch_coverage=1 00:12:10.543 --rc genhtml_function_coverage=1 00:12:10.543 --rc genhtml_legend=1 00:12:10.543 --rc geninfo_all_blocks=1 00:12:10.543 --rc geninfo_unexecuted_blocks=1 00:12:10.543 00:12:10.543 ' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.543 --rc genhtml_branch_coverage=1 00:12:10.543 --rc genhtml_function_coverage=1 00:12:10.543 --rc genhtml_legend=1 00:12:10.543 --rc geninfo_all_blocks=1 00:12:10.543 --rc geninfo_unexecuted_blocks=1 00:12:10.543 00:12:10.543 ' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.543 --rc genhtml_branch_coverage=1 00:12:10.543 --rc genhtml_function_coverage=1 00:12:10.543 --rc genhtml_legend=1 00:12:10.543 --rc geninfo_all_blocks=1 00:12:10.543 --rc geninfo_unexecuted_blocks=1 00:12:10.543 00:12:10.543 ' 00:12:10.543 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
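
The cmp_versions trace above (triggered by lt 1.15 2) is scripts/common.sh splitting each version string on '.' and '-', walking the components in parallel, and deciding as soon as one pair differs; here 1 < 2, so the installed lcov counts as older than 2.x and the lcov-specific coverage flags get set. A condensed sketch of that comparison under the assumption of purely numeric components (the function name and layout below are illustrative, not the library's exact code, which also validates each component):

    # Return 0 if $1 < $2 when compared component-wise on '.' and '-'.
    version_lt() {
        local -a a b
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}        # missing components compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"
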
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.543 --rc genhtml_branch_coverage=1 00:12:10.543 --rc genhtml_function_coverage=1 00:12:10.543 --rc genhtml_legend=1 00:12:10.543 --rc geninfo_all_blocks=1 00:12:10.543 --rc geninfo_unexecuted_blocks=1 00:12:10.543 00:12:10.544 ' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.544 17:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.544 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.803 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.803 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.803 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.803 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.375 
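
The "[: : integer expression expected" message captured above comes from '[' '' -eq 1 ']' in nvmf/common.sh line 33: -eq demands an integer on both sides, and an empty or unset variable makes [ report an error rather than simply evaluate false. The run survives only because the test is used as a condition. A small illustration of the failure and two common guards (the variable name is illustrative):

    flag=""

    # Reproduces the logged failure: -eq needs integers on both sides.
    [ "$flag" -eq 1 ]                       # bash: [: : integer expression expected

    # Guard 1: default empty/unset values before the numeric test.
    [ "${flag:-0}" -eq 1 ] && echo "flag is set"

    # Guard 2: compare as a string, which tolerates an empty value.
    [ "$flag" = 1 ] && echo "flag is set"
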
17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.375 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:17.375 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.376 
17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:17.376 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:17.376 Found net devices under 0000:af:00.0: cvl_0_0 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:17.376 Found net devices under 0000:af:00.1: cvl_0_1 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:12:17.376 00:12:17.376 --- 10.0.0.2 ping statistics --- 00:12:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.376 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:12:17.376 00:12:17.376 --- 10.0.0.1 ping statistics --- 00:12:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.376 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2501584 00:12:17.376 17:22:45 
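
nvmftestinit above turns the two E810 ports into a point-to-point testbed: cvl_0_0 is moved into a fresh network namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables ACCEPT rule opens TCP port 4420 on the initiator-facing interface, and one ping in each direction proves reachability before the target starts. A generic root-shell sketch of that setup, with the trace's device names parameterized:

    IF_TGT=cvl_0_0 IF_INI=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$IF_TGT" netns "$NS"                       # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$IF_INI"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
    ip link set "$IF_INI" up
    ip netns exec "$NS" ip link set "$IF_TGT" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port toward the initiator side (tagged so teardown can find it).
    iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator

Running the target in its own namespace is what lets a single machine act as both NVMe-oF host and target over real NICs instead of loopback.
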
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2501584 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2501584 ']' 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.376 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.376 [2024-12-09 17:22:45.742638] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:12:17.376 [2024-12-09 17:22:45.742683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.376 [2024-12-09 17:22:45.818457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.376 [2024-12-09 17:22:45.859013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.376 [2024-12-09 17:22:45.859049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.376 [2024-12-09 17:22:45.859060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.376 [2024-12-09 17:22:45.859066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.376 [2024-12-09 17:22:45.859070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:17.376 [2024-12-09 17:22:45.860604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.376 [2024-12-09 17:22:45.860715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.377 [2024-12-09 17:22:45.860823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.377 [2024-12-09 17:22:45.860825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 [2024-12-09 17:22:45.993203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.377 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 17:22:46 
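
With nvmf_tgt running inside the namespace, the test provisions it over /var/tmp/spdk.sock: a TCP transport, a 64 MiB malloc bdev, a subsystem, the bdev as that subsystem's namespace, and (just after this point in the trace) a listener on 10.0.0.2:4420. The equivalent direct calls, assuming as above that rpc_cmd maps onto scripts/rpc.py (path illustrative):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # TCP transport
    $RPC bdev_malloc_create 64 512                          # 64 MiB, 512 B blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
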
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 [2024-12-09 17:22:46.053989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:17.377 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:20.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.804 rmmod nvme_tcp 00:12:33.804 rmmod nvme_fabrics 00:12:33.804 rmmod nvme_keyring 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2501584 ']' 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2501584 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2501584 ']' 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2501584 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
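
The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the actual payload of connect_disconnect.sh: with xtrace off (set +x), it runs num_iterations=5 rounds of nvme connect followed by nvme disconnect and relies on nvme-cli's disconnect summary to confirm exactly one controller went away each round. A plausible reduction of that loop (the real helper also waits for the namespace block device to appear instead of sleeping):

    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        sleep 1                                             # stand-in for the device-ready wait
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # prints "... disconnected 1 controller(s)"
    done
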
00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501584 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.804 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501584' 00:12:33.804 killing process with pid 2501584 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2501584 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2501584 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.805 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.711 00:12:35.711 real 0m25.272s 00:12:35.711 user 1m8.604s 00:12:35.711 sys 0m5.804s 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.711 ************************************ 00:12:35.711 END TEST nvmf_connect_disconnect 00:12:35.711 ************************************ 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.711 17:23:04 
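
Before nvmf_multitarget begins, the nvmftestfini trace just above undoes everything the connect/disconnect test set up: the nvme kernel modules come out (modprobe -v -r cascades through nvme_tcp, nvme_fabrics, nvme_keyring), the nvmf_tgt reactor process is killed, the SPDK_NVMF-tagged iptables rules are filtered out, and the namespace plumbing is removed. A rough equivalent, assuming the namespace teardown inside _remove_spdk_ns amounts to an ip netns delete (that detail is not visible in the trace):

    kill "$nvmfpid" && wait "$nvmfpid"                      # pid 2501584 in this run
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the test's rules
    ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
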
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.711 ************************************ 00:12:35.711 START TEST nvmf_multitarget 00:12:35.711 ************************************ 00:12:35.711 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:35.971 * Looking for test storage... 00:12:35.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.971 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.971 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.971 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.971 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.972 --rc genhtml_branch_coverage=1 00:12:35.972 --rc genhtml_function_coverage=1 00:12:35.972 --rc genhtml_legend=1 00:12:35.972 --rc geninfo_all_blocks=1 00:12:35.972 --rc geninfo_unexecuted_blocks=1 00:12:35.972 00:12:35.972 ' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.972 --rc genhtml_branch_coverage=1 00:12:35.972 --rc genhtml_function_coverage=1 00:12:35.972 --rc genhtml_legend=1 00:12:35.972 --rc geninfo_all_blocks=1 00:12:35.972 --rc geninfo_unexecuted_blocks=1 00:12:35.972 00:12:35.972 ' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.972 --rc genhtml_branch_coverage=1 00:12:35.972 --rc genhtml_function_coverage=1 00:12:35.972 --rc genhtml_legend=1 00:12:35.972 --rc geninfo_all_blocks=1 00:12:35.972 --rc geninfo_unexecuted_blocks=1 00:12:35.972 00:12:35.972 ' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.972 --rc genhtml_branch_coverage=1 00:12:35.972 --rc genhtml_function_coverage=1 00:12:35.972 --rc genhtml_legend=1 00:12:35.972 --rc geninfo_all_blocks=1 00:12:35.972 --rc geninfo_unexecuted_blocks=1 00:12:35.972 00:12:35.972 ' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.972 17:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:35.972 17:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.972 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:42.655 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:42.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:42.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:42.656 Found net devices under 0000:af:00.0: cvl_0_0 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:42.656 Found net devices under 0000:af:00.1: cvl_0_1 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:12:42.656 00:12:42.656 --- 10.0.0.2 ping statistics --- 00:12:42.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.656 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:12:42.656 00:12:42.656 --- 10.0.0.1 ping statistics --- 00:12:42.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.656 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.656 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.656 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:42.656 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2507911 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2507911 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2507911 ']' 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.657 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.657 [2024-12-09 17:23:11.061535] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
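The nvmf_tcp_init steps traced above split the two e810 ports between a target network namespace and the default (initiator) namespace, then verify reachability in both directions with ping. Condensed into one block from the commands in the trace (the cvl_0_0/cvl_0_1 interface names are specific to this rig):

    # Recap of the namespace wiring nvmf_tcp_init performed above:
    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator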
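The multitarget test exercised in the trace that follows drives the target over JSON-RPC through multitarget_rpc.py: count targets with jq, create two more, recount, delete both, and confirm the count is back to one. A condensed sketch of that flow; the bracketed checks mirror the '[' N '!=' N ']' tests visible below:

    # Condensed from the multitarget.sh trace below; rpc_py is the helper it uses.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints true
    $rpc_py nvmf_delete_target -n nvmf_tgt_2              # prints true
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default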
00:12:42.657 [2024-12-09 17:23:11.061579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.657 [2024-12-09 17:23:11.145207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.657 [2024-12-09 17:23:11.186409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.657 [2024-12-09 17:23:11.186445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.657 [2024-12-09 17:23:11.186453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.657 [2024-12-09 17:23:11.186458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.657 [2024-12-09 17:23:11.186464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.657 [2024-12-09 17:23:11.187838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.657 [2024-12-09 17:23:11.187949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.657 [2024-12-09 17:23:11.187964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.657 [2024-12-09 17:23:11.187971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.915 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:42.915 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:42.915 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:43.173 "nvmf_tgt_1" 00:12:43.173 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:43.173 "nvmf_tgt_2" 00:12:43.173 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:43.173 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:43.432 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:43.432 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:43.432 true 00:12:43.432 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:43.432 true 00:12:43.432 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.432 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.691 rmmod nvme_tcp 00:12:43.691 rmmod nvme_fabrics 00:12:43.691 rmmod nvme_keyring 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2507911 ']' 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2507911 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2507911 ']' 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2507911 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507911 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.691 17:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507911' 00:12:43.691 killing process with pid 2507911 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2507911 00:12:43.691 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2507911 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.950 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.855 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.113 00:12:46.113 real 0m10.181s 00:12:46.113 user 0m9.714s 00:12:46.113 sys 0m4.930s 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.113 ************************************ 00:12:46.113 END TEST nvmf_multitarget 00:12:46.113 ************************************ 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.113 ************************************ 00:12:46.113 START TEST nvmf_rpc 00:12:46.113 ************************************ 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.113 * Looking for test storage... 
00:12:46.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:46.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.113 --rc genhtml_branch_coverage=1 00:12:46.113 --rc genhtml_function_coverage=1 00:12:46.113 --rc genhtml_legend=1 00:12:46.113 --rc geninfo_all_blocks=1 00:12:46.113 --rc geninfo_unexecuted_blocks=1 00:12:46.113 00:12:46.113 ' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:46.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.113 --rc genhtml_branch_coverage=1 00:12:46.113 --rc genhtml_function_coverage=1 00:12:46.113 --rc genhtml_legend=1 00:12:46.113 --rc geninfo_all_blocks=1 00:12:46.113 --rc geninfo_unexecuted_blocks=1 00:12:46.113 00:12:46.113 ' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:46.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.113 --rc genhtml_branch_coverage=1 00:12:46.113 --rc genhtml_function_coverage=1 00:12:46.113 --rc genhtml_legend=1 00:12:46.113 --rc geninfo_all_blocks=1 00:12:46.113 --rc geninfo_unexecuted_blocks=1 00:12:46.113 00:12:46.113 ' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:46.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.113 --rc genhtml_branch_coverage=1 00:12:46.113 --rc genhtml_function_coverage=1 00:12:46.113 --rc genhtml_legend=1 00:12:46.113 --rc geninfo_all_blocks=1 00:12:46.113 --rc geninfo_unexecuted_blocks=1 00:12:46.113 00:12:46.113 ' 00:12:46.113 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
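The scripts/common.sh trace a few records back (cmp_versions 1.15 '<' 2, run to pick the lcov option set to export) compares dotted version strings field by field. A condensed sketch of that logic, reconstructed from the trace itself, with the per-field decimal validation elided; not the literal common.sh:

    # Condensed from the lt/cmp_versions trace above:
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.- read -ra ver1 <<< "$1"    # split both versions on . and -
        IFS=.- read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]                 # all fields equal: true for ==, <=, >=
    }
    # 1 < 2 decides in the first field, so this succeeds and the --rc
    # lcov_branch_coverage/lcov_function_coverage flags get exported:
    lt 1.15 2 && echo "lcov older than 2"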
00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.372 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.373 17:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.373 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.942 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:52.943 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:52.943 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:52.943 Found net devices under 0000:af:00.0: cvl_0_0 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:52.943 Found net devices under 0000:af:00.1: cvl_0_1 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.943 17:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.943 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:12:52.943 00:12:52.943 --- 10.0.0.2 ping statistics --- 00:12:52.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.943 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:52.943 00:12:52.943 --- 10.0.0.1 ping statistics --- 00:12:52.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.943 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2511747 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2511747 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2511747 ']' 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.943 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 [2024-12-09 17:23:21.382995] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
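
Summarizing the block above: cvl_0_0 is moved into a private namespace, both sides get an address on 10.0.0.0/24, TCP port 4420 is opened toward the initiator NIC, reachability is proven with one ping in each direction, and nvmf_tgt is started inside the namespace (-m 0xF is why four "Reactor started" notices follow). Condensed into a runnable sketch with this run's values; the socket-polling loop is a simplified stand-in for the harness's waitforlisten, and /var/tmp/spdk.sock is SPDK's default RPC socket path:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do                # crude waitforlisten
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
  done
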
00:12:52.943 [2024-12-09 17:23:21.383046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.943 [2024-12-09 17:23:21.466232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.943 [2024-12-09 17:23:21.510003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.943 [2024-12-09 17:23:21.510039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.943 [2024-12-09 17:23:21.510046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.943 [2024-12-09 17:23:21.510052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.943 [2024-12-09 17:23:21.510058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.943 [2024-12-09 17:23:21.511681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.943 [2024-12-09 17:23:21.511789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.943 [2024-12-09 17:23:21.511875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.943 [2024-12-09 17:23:21.511876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:53.202 "tick_rate": 2100000000, 00:12:53.202 "poll_groups": [ 00:12:53.202 { 00:12:53.202 "name": "nvmf_tgt_poll_group_000", 00:12:53.202 "admin_qpairs": 0, 00:12:53.202 "io_qpairs": 0, 00:12:53.202 "current_admin_qpairs": 0, 00:12:53.202 "current_io_qpairs": 0, 00:12:53.202 "pending_bdev_io": 0, 00:12:53.202 "completed_nvme_io": 0, 00:12:53.202 "transports": [] 00:12:53.202 }, 00:12:53.202 { 00:12:53.202 "name": "nvmf_tgt_poll_group_001", 00:12:53.202 "admin_qpairs": 0, 00:12:53.202 "io_qpairs": 0, 00:12:53.202 "current_admin_qpairs": 0, 00:12:53.202 "current_io_qpairs": 0, 00:12:53.202 "pending_bdev_io": 0, 00:12:53.202 "completed_nvme_io": 0, 00:12:53.202 "transports": [] 00:12:53.202 }, 00:12:53.202 { 00:12:53.202 "name": "nvmf_tgt_poll_group_002", 00:12:53.202 "admin_qpairs": 0, 00:12:53.202 "io_qpairs": 0, 00:12:53.202 
"current_admin_qpairs": 0, 00:12:53.202 "current_io_qpairs": 0, 00:12:53.202 "pending_bdev_io": 0, 00:12:53.202 "completed_nvme_io": 0, 00:12:53.202 "transports": [] 00:12:53.202 }, 00:12:53.202 { 00:12:53.202 "name": "nvmf_tgt_poll_group_003", 00:12:53.202 "admin_qpairs": 0, 00:12:53.202 "io_qpairs": 0, 00:12:53.202 "current_admin_qpairs": 0, 00:12:53.202 "current_io_qpairs": 0, 00:12:53.202 "pending_bdev_io": 0, 00:12:53.202 "completed_nvme_io": 0, 00:12:53.202 "transports": [] 00:12:53.202 } 00:12:53.202 ] 00:12:53.202 }' 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.202 [2024-12-09 17:23:22.359887] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.202 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:53.461 "tick_rate": 2100000000, 00:12:53.461 "poll_groups": [ 00:12:53.461 { 00:12:53.461 "name": "nvmf_tgt_poll_group_000", 00:12:53.461 "admin_qpairs": 0, 00:12:53.461 "io_qpairs": 0, 00:12:53.461 "current_admin_qpairs": 0, 00:12:53.461 "current_io_qpairs": 0, 00:12:53.461 "pending_bdev_io": 0, 00:12:53.461 "completed_nvme_io": 0, 00:12:53.461 "transports": [ 00:12:53.461 { 00:12:53.461 "trtype": "TCP" 00:12:53.461 } 00:12:53.461 ] 00:12:53.461 }, 00:12:53.461 { 00:12:53.461 "name": "nvmf_tgt_poll_group_001", 00:12:53.461 "admin_qpairs": 0, 00:12:53.461 "io_qpairs": 0, 00:12:53.461 "current_admin_qpairs": 0, 00:12:53.461 "current_io_qpairs": 0, 00:12:53.461 "pending_bdev_io": 0, 00:12:53.461 "completed_nvme_io": 0, 00:12:53.461 "transports": [ 00:12:53.461 { 00:12:53.461 "trtype": "TCP" 00:12:53.461 } 00:12:53.461 ] 00:12:53.461 }, 00:12:53.461 { 00:12:53.461 "name": "nvmf_tgt_poll_group_002", 00:12:53.461 "admin_qpairs": 0, 00:12:53.461 "io_qpairs": 0, 00:12:53.461 "current_admin_qpairs": 0, 00:12:53.461 "current_io_qpairs": 0, 00:12:53.461 "pending_bdev_io": 0, 00:12:53.461 "completed_nvme_io": 0, 00:12:53.461 "transports": [ 00:12:53.461 { 00:12:53.461 "trtype": "TCP" 
00:12:53.461 } 00:12:53.461 ] 00:12:53.461 }, 00:12:53.461 { 00:12:53.461 "name": "nvmf_tgt_poll_group_003", 00:12:53.461 "admin_qpairs": 0, 00:12:53.461 "io_qpairs": 0, 00:12:53.461 "current_admin_qpairs": 0, 00:12:53.461 "current_io_qpairs": 0, 00:12:53.461 "pending_bdev_io": 0, 00:12:53.461 "completed_nvme_io": 0, 00:12:53.461 "transports": [ 00:12:53.461 { 00:12:53.461 "trtype": "TCP" 00:12:53.461 } 00:12:53.461 ] 00:12:53.461 } 00:12:53.461 ] 00:12:53.461 }' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:53.461 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.462 Malloc1 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
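
The jcount/jsum helpers traced above are the test's whole assertion mechanism: nvmf_get_stats returns the JSON shown, and the script reduces it to scalars before comparing. Written out directly (stats.json standing in for the rpc_cmd capture):

  # jcount '.poll_groups[].name': one poll group per reactor core, 4 with -m 0xF
  jq '.poll_groups[].name' stats.json | wc -l
  # jsum: total admin/io qpairs summed across all poll groups
  jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1} END {print s}'
  jq '.poll_groups[].io_qpairs'    stats.json | awk '{s+=$1} END {print s}'

Note the before/after pair in the trace: prior to nvmf_create_transport -t tcp each poll group's transports array is empty (the null probe at rpc.sh@29); afterwards every group reports a TCP transport entry.
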
common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.462 [2024-12-09 17:23:22.531291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:53.462 [2024-12-09 17:23:22.565789] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:53.462 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.462 could not add new controller: failed to write to nvme-fabrics device 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:53.462 17:23:22 
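
This "does not allow host" failure is deliberate: allow_any_host was disabled at rpc.sh@54, so a connect from a host NQN that is not on the subsystem's allow list must be rejected, and nvme-cli surfaces the rejection as the I/O error above. The grant that rpc.sh@61-62 performs next looks like this as plain commands (scripts/rpc.py is SPDK's RPC client; run it against the target in its namespace as the trace does, and the UUID NQN is this host's identity):

  # allow-list exactly this host on the subsystem ...
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  # ... after which the identical connect attempt succeeds
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
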
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.462 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.837 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.837 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:54.837 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.837 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:54.837 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:56.740 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.740 [2024-12-09 17:23:25.900163] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:56.998 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:56.998 could not add new controller: failed to write to nvme-fabrics device 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.998 
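
The NOT wrapper seen around both denied connects inverts a command's exit status so that an expected failure makes the test pass; the es bookkeeping visible in the trace also keeps signal deaths (status > 128) fatal rather than inverted. A compact version with the same intent, approximating the autotest_common.sh helper:

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: still a real failure
    (( es != 0 ))                    # succeed only when the wrapped command failed
  }
  # usage: the test passes only if the connect is rejected
  NOT nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

After remove_host the same denial fires again, and rpc.sh@72 then flips the policy back open with nvmf_subsystem_allow_any_host -e.
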
17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.998 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.375 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.375 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.375 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.375 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:58.375 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.279 
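
Both polling helpers appearing in this stretch of the trace reduce to lsblk scans keyed on the subsystem serial number. Roughly, as standalone functions (the real helpers live in autotest_common.sh; retry limits and sleeps are approximated from the traced values):

  waitforserial() {                 # wait until >= $2 devices with serial $1 appear
    local i=0
    sleep 2
    while (( i++ <= 15 )); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$1") >= ${2:-1} )) && return 0
      sleep 2
    done
    return 1
  }
  waitforserial_disconnect() {      # wait until no device with serial $1 remains
    local i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
      (( i++ > 15 )) && return 1
      sleep 2
    done
  }

Keying on the serial (SPDKISFASTANDAWESOME, set at subsystem creation) rather than a device name keeps the check stable across whatever /dev/nvmeXnY the kernel assigns.
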
17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.279 [2024-12-09 17:23:29.276471] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.279 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.655 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.655 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.655 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.655 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.655 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.558 [2024-12-09 17:23:32.682739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
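
Each of the five passes of the rpc.sh@81 loop repeats the same create/attach/connect/verify/teardown cycle; condensed to direct commands, one iteration is roughly (waitforserial/waitforserial_disconnect as sketched above; host NQN options omitted for brevity):

  for i in 1 2 3 4 5; do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5  # fixed nsid 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

Deleting and recreating the whole subsystem each pass is the point: it exercises listener and namespace lifecycle under repeated setup/teardown, not just steady-state I/O.
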
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.558 17:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.935 17:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.935 17:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.935 17:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.935 17:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:04.935 17:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:06.837 17:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.837 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:06.837 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.837 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.837 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 [2024-12-09 17:23:36.039726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.096 17:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.032 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.032 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.032 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.032 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.032 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.565 
17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 [2024-12-09 17:23:39.350638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.566 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.501 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.501 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.501 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.501 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.501 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:13.405 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.664 [2024-12-09 17:23:42.627328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.664 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.665 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.665 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.665 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.665 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.665 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.601 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.601 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.601 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.601 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:14.601 17:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:17.136 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.136 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.136 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.136 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:17.137 
17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 [2024-12-09 17:23:45.951780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 [2024-12-09 17:23:45.999810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 
17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 [2024-12-09 17:23:46.047963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.137 [2024-12-09 17:23:46.096144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.137 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 [2024-12-09 17:23:46.144319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:13:17.138 "tick_rate": 2100000000,
00:13:17.138 "poll_groups": [
00:13:17.138 {
00:13:17.138 "name": "nvmf_tgt_poll_group_000",
00:13:17.138 "admin_qpairs": 2,
00:13:17.138 "io_qpairs": 168,
00:13:17.138 "current_admin_qpairs": 0,
00:13:17.138 "current_io_qpairs": 0,
00:13:17.138 "pending_bdev_io": 0,
00:13:17.138 "completed_nvme_io": 251,
00:13:17.138 "transports": [
00:13:17.138 {
00:13:17.138 "trtype": "TCP"
00:13:17.138 }
00:13:17.138 ]
00:13:17.138 },
00:13:17.138 {
00:13:17.138 "name": "nvmf_tgt_poll_group_001",
00:13:17.138 "admin_qpairs": 2,
00:13:17.138 "io_qpairs": 168,
00:13:17.138 "current_admin_qpairs": 0,
00:13:17.138 "current_io_qpairs": 0,
00:13:17.138 "pending_bdev_io": 0,
00:13:17.138 "completed_nvme_io": 301,
00:13:17.138 "transports": [
00:13:17.138 {
00:13:17.138 "trtype": "TCP"
00:13:17.138 }
00:13:17.138 ]
00:13:17.138 },
00:13:17.138 {
00:13:17.138 "name": "nvmf_tgt_poll_group_002",
00:13:17.138 "admin_qpairs": 1,
00:13:17.138 "io_qpairs": 168,
00:13:17.138 "current_admin_qpairs": 0,
00:13:17.138 "current_io_qpairs": 0,
00:13:17.138 "pending_bdev_io": 0,
00:13:17.138 "completed_nvme_io": 252,
00:13:17.138 "transports": [
00:13:17.138 {
00:13:17.138 "trtype": "TCP"
00:13:17.138 }
00:13:17.138 ]
00:13:17.138 },
00:13:17.138 {
00:13:17.138 "name": "nvmf_tgt_poll_group_003",
00:13:17.138 "admin_qpairs": 2,
00:13:17.138 "io_qpairs": 168,
00:13:17.138 "current_admin_qpairs": 0,
00:13:17.138 "current_io_qpairs": 0,
00:13:17.138 "pending_bdev_io": 0,
00:13:17.138 "completed_nvme_io": 218,
00:13:17.138 "transports": [
00:13:17.138 {
00:13:17.138 "trtype": "TCP"
00:13:17.138 }
00:13:17.138 ]
00:13:17.138 }
00:13:17.138 ]
00:13:17.138 }'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 ))
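The two arithmetic checks just above, (( 7 > 0 )) and (( 672 > 0 )), are the output of the jsum helper applied to the captured stats: jq extracts one counter per poll group from the nvmf_get_stats JSON and awk totals the column (admin_qpairs 2+2+1+2 = 7, io_qpairs 4 x 168 = 672). A sketch of the likely shape of that helper, assuming it consumes the $stats blob captured above:

    # Sketch, assuming $stats holds the nvmf_get_stats JSON captured above;
    # the jq filter strings are verbatim from the trace.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    stats=$(./scripts/rpc.py nvmf_get_stats)          # path assumed relative to the spdk checkout
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4*168 = 672 in this run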
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:17.138 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:17.138 rmmod nvme_tcp
00:13:17.397 rmmod nvme_fabrics
00:13:17.397 rmmod nvme_keyring
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2511747 ']'
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2511747
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2511747 ']'
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2511747
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511747
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511747'
00:13:17.397 killing process with pid 2511747
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2511747
00:13:17.397 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2511747
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:17.657 17:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:19.561
00:13:19.561 real 0m33.566s
00:13:19.561 user 1m41.794s
00:13:19.561 sys 0m6.541s
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:19.561 ************************************
00:13:19.561 END TEST nvmf_rpc
00:13:19.561 ************************************
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:19.561 17:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:19.820 ************************************
00:13:19.820 START TEST nvmf_invalid
00:13:19.820 ************************************
00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:19.820 * Looking for test storage...
00:13:19.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.820 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.821 --rc genhtml_branch_coverage=1 00:13:19.821 --rc genhtml_function_coverage=1 00:13:19.821 --rc genhtml_legend=1 00:13:19.821 --rc geninfo_all_blocks=1 00:13:19.821 --rc geninfo_unexecuted_blocks=1 00:13:19.821 00:13:19.821 ' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.821 --rc genhtml_branch_coverage=1 00:13:19.821 --rc genhtml_function_coverage=1 00:13:19.821 --rc genhtml_legend=1 00:13:19.821 --rc geninfo_all_blocks=1 00:13:19.821 --rc geninfo_unexecuted_blocks=1 00:13:19.821 00:13:19.821 ' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.821 --rc genhtml_branch_coverage=1 00:13:19.821 --rc genhtml_function_coverage=1 00:13:19.821 --rc genhtml_legend=1 00:13:19.821 --rc geninfo_all_blocks=1 00:13:19.821 --rc geninfo_unexecuted_blocks=1 00:13:19.821 00:13:19.821 ' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.821 --rc genhtml_branch_coverage=1 00:13:19.821 --rc genhtml_function_coverage=1 00:13:19.821 --rc genhtml_legend=1 00:13:19.821 --rc geninfo_all_blocks=1 00:13:19.821 --rc geninfo_unexecuted_blocks=1 00:13:19.821 00:13:19.821 ' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:19.821 17:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.821 17:23:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:26.391 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:26.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:26.391 Found net devices under 0000:af:00.0: cvl_0_0 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:26.391 Found net devices under 0000:af:00.1: cvl_0_1 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:26.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:26.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms
00:13:26.391
00:13:26.391 --- 10.0.0.2 ping statistics ---
00:13:26.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:26.391 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:26.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:26.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms
00:13:26.391
00:13:26.391 --- 10.0.0.1 ping statistics ---
00:13:26.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:26.391 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2519431
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2519431
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2519431 ']'
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:26.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:26.391 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:26.391 [2024-12-09 17:23:54.926652] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
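The sequence above pins down the loopback topology used for the NVMe/TCP tests in this job: the first e810 port (cvl_0_0) is moved into the private namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction gates the run. A condensed sketch of that setup; the interface names are specific to this rig, and the SPDK_NVMF comment on the firewall rule is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore strip it later.

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk                      # names as used by nvmf/common.sh in this run
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1

    ip -4 addr flush dev "$TGT_IF"
    ip -4 addr flush dev "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Open the listener port; the tagged comment makes the rule easy to grep out on cleanup.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                      # initiator to target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target to initiator
    # The target itself then runs namespaced, as traced above:
    # ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF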
00:13:26.391 [2024-12-09 17:23:54.926703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.391 [2024-12-09 17:23:55.006367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.391 [2024-12-09 17:23:55.049266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.391 [2024-12-09 17:23:55.049301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.391 [2024-12-09 17:23:55.049308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.391 [2024-12-09 17:23:55.049315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.391 [2024-12-09 17:23:55.049321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.391 [2024-12-09 17:23:55.050733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.391 [2024-12-09 17:23:55.050841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.391 [2024-12-09 17:23:55.050952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.391 [2024-12-09 17:23:55.050952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.391 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:26.392 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18479 00:13:26.392 [2024-12-09 17:23:55.365209] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:26.392 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:26.392 { 00:13:26.392 "nqn": "nqn.2016-06.io.spdk:cnode18479", 00:13:26.392 "tgt_name": "foobar", 00:13:26.392 "method": "nvmf_create_subsystem", 00:13:26.392 "req_id": 1 00:13:26.392 } 00:13:26.392 Got JSON-RPC error response 00:13:26.392 response: 00:13:26.392 { 00:13:26.392 "code": -32603, 00:13:26.392 "message": "Unable to find target foobar" 00:13:26.392 }' 00:13:26.392 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:26.392 { 00:13:26.392 "nqn": "nqn.2016-06.io.spdk:cnode18479", 00:13:26.392 "tgt_name": "foobar", 00:13:26.392 "method": "nvmf_create_subsystem", 00:13:26.392 "req_id": 1 00:13:26.392 } 00:13:26.392 Got JSON-RPC error response 00:13:26.392 
response: 00:13:26.392 { 00:13:26.392 "code": -32603, 00:13:26.392 "message": "Unable to find target foobar" 00:13:26.392 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:26.392 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:26.392 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11747 00:13:26.392 [2024-12-09 17:23:55.565889] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11747: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:26.651 { 00:13:26.651 "nqn": "nqn.2016-06.io.spdk:cnode11747", 00:13:26.651 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:26.651 "method": "nvmf_create_subsystem", 00:13:26.651 "req_id": 1 00:13:26.651 } 00:13:26.651 Got JSON-RPC error response 00:13:26.651 response: 00:13:26.651 { 00:13:26.651 "code": -32602, 00:13:26.651 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:26.651 }' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:26.651 { 00:13:26.651 "nqn": "nqn.2016-06.io.spdk:cnode11747", 00:13:26.651 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:26.651 "method": "nvmf_create_subsystem", 00:13:26.651 "req_id": 1 00:13:26.651 } 00:13:26.651 Got JSON-RPC error response 00:13:26.651 response: 00:13:26.651 { 00:13:26.651 "code": -32602, 00:13:26.651 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:26.651 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5267 00:13:26.651 [2024-12-09 17:23:55.770587] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5267: invalid model number 'SPDK_Controller' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:26.651 { 00:13:26.651 "nqn": "nqn.2016-06.io.spdk:cnode5267", 00:13:26.651 "model_number": "SPDK_Controller\u001f", 00:13:26.651 "method": "nvmf_create_subsystem", 00:13:26.651 "req_id": 1 00:13:26.651 } 00:13:26.651 Got JSON-RPC error response 00:13:26.651 response: 00:13:26.651 { 00:13:26.651 "code": -32602, 00:13:26.651 "message": "Invalid MN SPDK_Controller\u001f" 00:13:26.651 }' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:26.651 { 00:13:26.651 "nqn": "nqn.2016-06.io.spdk:cnode5267", 00:13:26.651 "model_number": "SPDK_Controller\u001f", 00:13:26.651 "method": "nvmf_create_subsystem", 00:13:26.651 "req_id": 1 00:13:26.651 } 00:13:26.651 Got JSON-RPC error response 00:13:26.651 response: 00:13:26.651 { 00:13:26.651 "code": -32602, 00:13:26.651 "message": "Invalid MN SPDK_Controller\u001f" 00:13:26.651 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:26.651 17:23:55 
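
The three negative probes traced above condense to the rpc.py calls below (rpc.py is scripts/rpc.py in the SPDK tree; the cnode numbers are random per run). Each call must fail, and the test only asserts that the error text matches:

# Unknown target name: -32603 "Unable to find target foobar"
./scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18479
# Serial number with an embedded 0x1f control byte: -32602 "Invalid SN ..."
./scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11747
# Model number with the same control byte: -32602 "Invalid MN ..."
./scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5267
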
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.651 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:26.910 
17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:26.910 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
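
The long trace here is gen_random_s building a string one byte at a time: draw an index into the chars=() pool (ASCII codes 32 through 127), print it as hex with printf %x, and turn the hex back into a byte with echo -e. A condensed sketch of the same idea, not the harness's literal helper:

gen_random_s() {
    # Build an n-byte string from ASCII 32..127, the same pool as chars=() above.
    local n=$1 s='' c i
    for ((i = 0; i < n; i++)); do
        c=$((32 + RANDOM % 96))                   # 96 values: 0x20 .. 0x7f
        s+=$(printf "\\x$(printf '%x' "$c")")     # decimal -> hex -> byte
    done
    printf '%s\n' "$s"
}
gen_random_s 21    # e.g. the 21-byte serial number exercised below
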
00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'snIn$8jh/+fS:2?NL0v=' 00:13:26.911 17:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'snIn$8jh/+fS:2?NL0v=' nqn.2016-06.io.spdk:cnode15928 00:13:27.170 [2024-12-09 17:23:56.115757] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15928: invalid serial number 'snIn$8jh/+fS:2?NL0v=' 00:13:27.170 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # 
out='request: 00:13:27.170 { 00:13:27.170 "nqn": "nqn.2016-06.io.spdk:cnode15928", 00:13:27.171 "serial_number": "snIn\u007f$8jh/+fS:2?NL0v=", 00:13:27.171 "method": "nvmf_create_subsystem", 00:13:27.171 "req_id": 1 00:13:27.171 } 00:13:27.171 Got JSON-RPC error response 00:13:27.171 response: 00:13:27.171 { 00:13:27.171 "code": -32602, 00:13:27.171 "message": "Invalid SN snIn\u007f$8jh/+fS:2?NL0v=" 00:13:27.171 }' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:27.171 { 00:13:27.171 "nqn": "nqn.2016-06.io.spdk:cnode15928", 00:13:27.171 "serial_number": "snIn\u007f$8jh/+fS:2?NL0v=", 00:13:27.171 "method": "nvmf_create_subsystem", 00:13:27.171 "req_id": 1 00:13:27.171 } 00:13:27.171 Got JSON-RPC error response 00:13:27.171 response: 00:13:27.171 { 00:13:27.171 "code": -32602, 00:13:27.171 "message": "Invalid SN snIn\u007f$8jh/+fS:2?NL0v=" 00:13:27.171 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=')' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x6d' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:27.171 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 
00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 
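
The lengths 21 and 41 are deliberate: the NVMe serial-number and model-number fields hold at most 20 and 40 ASCII characters, so these generated strings are one byte over and must be rejected whatever they contain, while any DEL (0x7f) bytes drawn from the pool (rendered \u007f in the JSON errors) would independently trip the printable-ASCII check. Reconstructing the 21-byte serial from the first pass above:

printf 'snIn\x7f$8jh/+fS:2?NL0v=' | wc -c      # 21 bytes: one over the 20-byte SN field
printf 'snIn\x7f$8jh/+fS:2?NL0v=' | od -An -c  # the hidden DEL after "snIn" prints as octal 177
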
00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:27.172 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'R$)d1KHWKm'\''H-@!3bFOM+9|@K8J-sbMF~kIZMzW' 00:13:27.431 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d 'R$)d1KHWKm'\''H-@!3bFOM+9|@K8J-sbMF~kIZMzW' nqn.2016-06.io.spdk:cnode8117 00:13:27.431 [2024-12-09 17:23:56.601304] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8117: invalid model number 'R$)d1KHWKm'H-@!3bFOM+9|@K8J-sbMF~kIZMzW' 00:13:27.689 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:27.689 { 00:13:27.690 "nqn": "nqn.2016-06.io.spdk:cnode8117", 00:13:27.690 "model_number": "R$)d1KHWKm'\''H-@!3bFOM+9|@K8J-sbM\u007fF~kIZMzW\u007f", 00:13:27.690 "method": "nvmf_create_subsystem", 00:13:27.690 "req_id": 1 00:13:27.690 } 00:13:27.690 Got JSON-RPC error response 00:13:27.690 response: 00:13:27.690 { 00:13:27.690 "code": -32602, 00:13:27.690 "message": "Invalid MN R$)d1KHWKm'\''H-@!3bFOM+9|@K8J-sbM\u007fF~kIZMzW\u007f" 00:13:27.690 }' 00:13:27.690 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:27.690 { 00:13:27.690 "nqn": "nqn.2016-06.io.spdk:cnode8117", 00:13:27.690 "model_number": "R$)d1KHWKm'H-@!3bFOM+9|@K8J-sbM\u007fF~kIZMzW\u007f", 00:13:27.690 "method": "nvmf_create_subsystem", 00:13:27.690 "req_id": 1 00:13:27.690 } 00:13:27.690 Got JSON-RPC error response 00:13:27.690 response: 00:13:27.690 { 00:13:27.690 "code": -32602, 00:13:27.690 "message": "Invalid MN R$)d1KHWKm'H-@!3bFOM+9|@K8J-sbM\u007fF~kIZMzW\u007f" 00:13:27.690 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:27.690 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:27.690 [2024-12-09 17:23:56.802061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.690 17:23:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:27.948 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:27.948 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:27.948 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:27.948 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:27.948 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:28.206 [2024-12-09 17:23:57.211413] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:28.206 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:28.206 { 00:13:28.206 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:28.206 "listen_address": { 00:13:28.206 "trtype": "tcp", 00:13:28.206 "traddr": "", 00:13:28.206 "trsvcid": "4421" 00:13:28.206 }, 00:13:28.206 "method": "nvmf_subsystem_remove_listener", 00:13:28.206 "req_id": 1 00:13:28.206 } 00:13:28.206 Got JSON-RPC error response 00:13:28.206 response: 00:13:28.206 { 00:13:28.206 "code": -32602, 00:13:28.206 "message": "Invalid parameters" 00:13:28.206 }' 00:13:28.206 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:28.206 { 00:13:28.206 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:28.206 "listen_address": { 00:13:28.206 "trtype": "tcp", 00:13:28.206 "traddr": "", 
00:13:28.206 "trsvcid": "4421" 00:13:28.206 }, 00:13:28.206 "method": "nvmf_subsystem_remove_listener", 00:13:28.206 "req_id": 1 00:13:28.206 } 00:13:28.206 Got JSON-RPC error response 00:13:28.206 response: 00:13:28.206 { 00:13:28.206 "code": -32602, 00:13:28.206 "message": "Invalid parameters" 00:13:28.206 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:28.206 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21125 -i 0 00:13:28.465 [2024-12-09 17:23:57.400012] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21125: invalid cntlid range [0-65519] 00:13:28.465 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:28.465 { 00:13:28.465 "nqn": "nqn.2016-06.io.spdk:cnode21125", 00:13:28.465 "min_cntlid": 0, 00:13:28.465 "method": "nvmf_create_subsystem", 00:13:28.465 "req_id": 1 00:13:28.465 } 00:13:28.465 Got JSON-RPC error response 00:13:28.465 response: 00:13:28.465 { 00:13:28.465 "code": -32602, 00:13:28.465 "message": "Invalid cntlid range [0-65519]" 00:13:28.465 }' 00:13:28.465 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:28.465 { 00:13:28.465 "nqn": "nqn.2016-06.io.spdk:cnode21125", 00:13:28.465 "min_cntlid": 0, 00:13:28.465 "method": "nvmf_create_subsystem", 00:13:28.465 "req_id": 1 00:13:28.465 } 00:13:28.465 Got JSON-RPC error response 00:13:28.465 response: 00:13:28.465 { 00:13:28.465 "code": -32602, 00:13:28.465 "message": "Invalid cntlid range [0-65519]" 00:13:28.465 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:28.465 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30699 -i 65520 00:13:28.465 [2024-12-09 17:23:57.600669] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30699: invalid cntlid range [65520-65519] 00:13:28.465 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:28.465 { 00:13:28.465 "nqn": "nqn.2016-06.io.spdk:cnode30699", 00:13:28.465 "min_cntlid": 65520, 00:13:28.465 "method": "nvmf_create_subsystem", 00:13:28.465 "req_id": 1 00:13:28.465 } 00:13:28.465 Got JSON-RPC error response 00:13:28.465 response: 00:13:28.465 { 00:13:28.465 "code": -32602, 00:13:28.465 "message": "Invalid cntlid range [65520-65519]" 00:13:28.465 }' 00:13:28.465 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:28.465 { 00:13:28.465 "nqn": "nqn.2016-06.io.spdk:cnode30699", 00:13:28.465 "min_cntlid": 65520, 00:13:28.465 "method": "nvmf_create_subsystem", 00:13:28.465 "req_id": 1 00:13:28.465 } 00:13:28.465 Got JSON-RPC error response 00:13:28.465 response: 00:13:28.465 { 00:13:28.465 "code": -32602, 00:13:28.465 "message": "Invalid cntlid range [65520-65519]" 00:13:28.465 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:28.465 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22920 -I 0 00:13:28.724 [2024-12-09 17:23:57.817399] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22920: invalid cntlid range [1-0] 00:13:28.724 17:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:28.724 { 00:13:28.724 "nqn": "nqn.2016-06.io.spdk:cnode22920", 00:13:28.724 "max_cntlid": 0, 00:13:28.724 "method": "nvmf_create_subsystem", 00:13:28.724 "req_id": 1 00:13:28.724 } 00:13:28.724 Got JSON-RPC error response 00:13:28.724 response: 00:13:28.724 { 00:13:28.724 "code": -32602, 00:13:28.724 "message": "Invalid cntlid range [1-0]" 00:13:28.724 }' 00:13:28.724 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:28.724 { 00:13:28.724 "nqn": "nqn.2016-06.io.spdk:cnode22920", 00:13:28.724 "max_cntlid": 0, 00:13:28.724 "method": "nvmf_create_subsystem", 00:13:28.724 "req_id": 1 00:13:28.724 } 00:13:28.724 Got JSON-RPC error response 00:13:28.724 response: 00:13:28.724 { 00:13:28.724 "code": -32602, 00:13:28.724 "message": "Invalid cntlid range [1-0]" 00:13:28.724 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:28.724 17:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25824 -I 65520 00:13:28.982 [2024-12-09 17:23:58.026104] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25824: invalid cntlid range [1-65520] 00:13:28.982 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:28.982 { 00:13:28.982 "nqn": "nqn.2016-06.io.spdk:cnode25824", 00:13:28.982 "max_cntlid": 65520, 00:13:28.982 "method": "nvmf_create_subsystem", 00:13:28.982 "req_id": 1 00:13:28.982 } 00:13:28.982 Got JSON-RPC error response 00:13:28.982 response: 00:13:28.982 { 00:13:28.982 "code": -32602, 00:13:28.982 "message": "Invalid cntlid range [1-65520]" 00:13:28.982 }' 00:13:28.982 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:28.982 { 00:13:28.982 "nqn": "nqn.2016-06.io.spdk:cnode25824", 00:13:28.982 "max_cntlid": 65520, 00:13:28.982 "method": "nvmf_create_subsystem", 00:13:28.982 "req_id": 1 00:13:28.982 } 00:13:28.982 Got JSON-RPC error response 00:13:28.982 response: 00:13:28.982 { 00:13:28.982 "code": -32602, 00:13:28.982 "message": "Invalid cntlid range [1-65520]" 00:13:28.982 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:28.982 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20065 -i 6 -I 5 00:13:29.241 [2024-12-09 17:23:58.230823] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20065: invalid cntlid range [6-5] 00:13:29.241 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:29.241 { 00:13:29.241 "nqn": "nqn.2016-06.io.spdk:cnode20065", 00:13:29.241 "min_cntlid": 6, 00:13:29.241 "max_cntlid": 5, 00:13:29.241 "method": "nvmf_create_subsystem", 00:13:29.241 "req_id": 1 00:13:29.241 } 00:13:29.241 Got JSON-RPC error response 00:13:29.241 response: 00:13:29.241 { 00:13:29.241 "code": -32602, 00:13:29.241 "message": "Invalid cntlid range [6-5]" 00:13:29.241 }' 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:29.242 { 00:13:29.242 "nqn": "nqn.2016-06.io.spdk:cnode20065", 00:13:29.242 "min_cntlid": 6, 00:13:29.242 "max_cntlid": 5, 00:13:29.242 "method": "nvmf_create_subsystem", 00:13:29.242 "req_id": 1 00:13:29.242 } 
00:13:29.242 Got JSON-RPC error response 00:13:29.242 response: 00:13:29.242 { 00:13:29.242 "code": -32602, 00:13:29.242 "message": "Invalid cntlid range [6-5]" 00:13:29.242 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:29.242 { 00:13:29.242 "name": "foobar", 00:13:29.242 "method": "nvmf_delete_target", 00:13:29.242 "req_id": 1 00:13:29.242 } 00:13:29.242 Got JSON-RPC error response 00:13:29.242 response: 00:13:29.242 { 00:13:29.242 "code": -32602, 00:13:29.242 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:29.242 }' 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:29.242 { 00:13:29.242 "name": "foobar", 00:13:29.242 "method": "nvmf_delete_target", 00:13:29.242 "req_id": 1 00:13:29.242 } 00:13:29.242 Got JSON-RPC error response 00:13:29.242 response: 00:13:29.242 { 00:13:29.242 "code": -32602, 00:13:29.242 "message": "The specified target doesn't exist, cannot delete it." 00:13:29.242 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:29.242 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:29.242 rmmod nvme_tcp 00:13:29.242 rmmod nvme_fabrics 00:13:29.242 rmmod nvme_keyring 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2519431 ']' 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2519431 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2519431 ']' 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2519431 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2519431 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
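
The teardown traced across these lines reduces to a few commands: stop the target, unload the NVMe host modules, strip only the SPDK_NVMF-tagged firewall rule, and remove the namespace. The netns delete is the assumed body of _remove_spdk_ns, which the trace leaves unexpanded:

kill "$nvmfpid" && wait "$nvmfpid"                    # pid recorded at nvmfappstart
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drops exactly the tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                       # assumed expansion of _remove_spdk_ns
ip -4 addr flush cvl_0_1
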
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2519431' 00:13:29.501 killing process with pid 2519431 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2519431 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2519431 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.501 17:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.036 00:13:32.036 real 0m11.966s 00:13:32.036 user 0m18.686s 00:13:32.036 sys 0m5.284s 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.036 ************************************ 00:13:32.036 END TEST nvmf_invalid 00:13:32.036 ************************************ 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.036 ************************************ 00:13:32.036 START TEST nvmf_connect_stress 00:13:32.036 ************************************ 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:32.036 * Looking for test storage... 
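(A minimal sketch for reproducing the invalid-cntlid checks above by hand, assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and the repository's scripts/rpc.py invoked from the spdk repo root; per the JSON requests captured in the log, -i sets min_cntlid and -I sets max_cntlid. Each call is expected to fail with JSON-RPC error -32602, "Invalid cntlid range [...]", as recorded above.)

    # max_cntlid of 0 -> rejected as range [1-0] (invalid.sh@77 above)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22920 -I 0
    # max_cntlid of 65520 exceeds the allowed maximum -> rejected as range [1-65520] (invalid.sh@79)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25824 -I 65520
    # min_cntlid greater than max_cntlid -> rejected as range [6-5] (invalid.sh@83)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20065 -i 6 -I 5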
00:13:32.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.036 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:32.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.037 --rc genhtml_branch_coverage=1 00:13:32.037 --rc genhtml_function_coverage=1 00:13:32.037 --rc genhtml_legend=1 00:13:32.037 --rc geninfo_all_blocks=1 00:13:32.037 --rc geninfo_unexecuted_blocks=1 00:13:32.037 00:13:32.037 ' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:32.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.037 --rc genhtml_branch_coverage=1 00:13:32.037 --rc genhtml_function_coverage=1 00:13:32.037 --rc genhtml_legend=1 00:13:32.037 --rc geninfo_all_blocks=1 00:13:32.037 --rc geninfo_unexecuted_blocks=1 00:13:32.037 00:13:32.037 ' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:32.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.037 --rc genhtml_branch_coverage=1 00:13:32.037 --rc genhtml_function_coverage=1 00:13:32.037 --rc genhtml_legend=1 00:13:32.037 --rc geninfo_all_blocks=1 00:13:32.037 --rc geninfo_unexecuted_blocks=1 00:13:32.037 00:13:32.037 ' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:32.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.037 --rc genhtml_branch_coverage=1 00:13:32.037 --rc genhtml_function_coverage=1 00:13:32.037 --rc genhtml_legend=1 00:13:32.037 --rc geninfo_all_blocks=1 00:13:32.037 --rc geninfo_unexecuted_blocks=1 00:13:32.037 00:13:32.037 ' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:32.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.037 17:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.038 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:32.038 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:32.038 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:32.038 17:24:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:38.609 17:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:38.609 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:38.609 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:38.609 Found net devices under 0000:af:00.0: cvl_0_0 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:38.609 Found net devices under 0000:af:00.1: cvl_0_1 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:38.609 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:38.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:13:38.610 00:13:38.610 --- 10.0.0.2 ping statistics --- 00:13:38.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.610 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:13:38.610 00:13:38.610 --- 10.0.0.1 ping statistics --- 00:13:38.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.610 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2524146 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2524146 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2524146 ']' 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:38.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.610 17:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.610 [2024-12-09 17:24:06.960522] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:13:38.610 [2024-12-09 17:24:06.960568] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.610 [2024-12-09 17:24:07.038786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.610 [2024-12-09 17:24:07.077818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.610 [2024-12-09 17:24:07.077855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.610 [2024-12-09 17:24:07.077862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.610 [2024-12-09 17:24:07.077868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.610 [2024-12-09 17:24:07.077873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.610 [2024-12-09 17:24:07.079291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.610 [2024-12-09 17:24:07.079417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.610 [2024-12-09 17:24:07.079418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.924 [2024-12-09 17:24:07.845746] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
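(A consolidated sketch of the target bring-up this harness performs; assumptions: run from the spdk repository root, rpc.py talking to the default /var/tmp/spdk.sock, and the ip-netns wrapper matching this job's cvl_0_0_ns_spdk namespace that owns cvl_0_0/10.0.0.2. The listener, NULL1 bdev, and stress-client steps appear verbatim in the log immediately below.)

    # start the target inside the network namespace holding the target-side interface
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # connect_stress.sh@15-@18: TCP transport, subsystem, listener, backing null bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    # connect_stress.sh@20: 10-second stress client; the loop below polls its PID with kill -0
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10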
00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.924 [2024-12-09 17:24:07.869988] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.924 NULL1 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2524494 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.924 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.925 17:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.925 17:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.216 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.216 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:39.216 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.216 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.216 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.474 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.474 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:39.474 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.474 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.474 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.041 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.041 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:40.041 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.041 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.041 17:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.299 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.299 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:40.299 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.299 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.299 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.557 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.557 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:40.557 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.557 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.557 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.816 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.816 17:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:40.816 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.816 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.816 17:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.074 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.074 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:41.074 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.074 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.074 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.640 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.640 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:41.640 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.640 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.640 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.899 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.899 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:41.899 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.899 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.899 17:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.157 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.157 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:42.157 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.157 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.157 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.416 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.416 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:42.416 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.416 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.416 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 17:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:42.983 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.983 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 17:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.241 17:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[the liveness loop above (connect_stress.sh@34 kill -0 2524494, @35 rpc_cmd, xtrace_disable / set +x, [[ 0 == 0 ]]) repeats identically from 00:13:43.241 (17:24:12) through 00:13:48.564 (17:24:17); only the timestamps advance]
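A minimal standalone sketch of the liveness-poll pattern being traced here (the real loop is connect_stress.sh lines 34-38; the sleep is a stand-in for the stress workload, illustration only):

    sleep 5 &                      # stand-in for the background stress process
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        # kill -0 delivers no signal; it only reports whether $pid still exists
        # (the test issues an rpc_cmd against the target on every pass here)
        sleep 1
    done
    wait "$pid"                    # reap the exit status once the process is gone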
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:48.564 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.564 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.564 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.131 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.131 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.131 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2524494 00:13:49.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2524494) - No such process 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2524494 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.132 rmmod nvme_tcp 00:13:49.132 rmmod nvme_fabrics 00:13:49.132 rmmod nvme_keyring 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2524146 ']' 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2524146 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2524146 ']' 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2524146 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524146 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524146' 00:13:49.132 killing process with pid 2524146 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2524146 00:13:49.132 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2524146 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.391 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:51.296 00:13:51.296 real 0m19.621s 00:13:51.296 user 0m41.487s 00:13:51.296 sys 0m8.533s 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.296 ************************************ 00:13:51.296 END TEST nvmf_connect_stress 00:13:51.296 ************************************ 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.296 17:24:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.556 ************************************ 00:13:51.556 START TEST nvmf_fused_ordering 00:13:51.556 ************************************ 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:51.556 * Looking for test storage... 
00:13:51.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:51.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.556 --rc genhtml_branch_coverage=1 00:13:51.556 --rc genhtml_function_coverage=1 00:13:51.556 --rc genhtml_legend=1 00:13:51.556 --rc geninfo_all_blocks=1 00:13:51.556 --rc geninfo_unexecuted_blocks=1 00:13:51.556 00:13:51.556 ' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:51.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.556 --rc genhtml_branch_coverage=1 00:13:51.556 --rc genhtml_function_coverage=1 00:13:51.556 --rc genhtml_legend=1 00:13:51.556 --rc geninfo_all_blocks=1 00:13:51.556 --rc geninfo_unexecuted_blocks=1 00:13:51.556 00:13:51.556 ' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:51.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.556 --rc genhtml_branch_coverage=1 00:13:51.556 --rc genhtml_function_coverage=1 00:13:51.556 --rc genhtml_legend=1 00:13:51.556 --rc geninfo_all_blocks=1 00:13:51.556 --rc geninfo_unexecuted_blocks=1 00:13:51.556 00:13:51.556 ' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:51.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.556 --rc genhtml_branch_coverage=1 00:13:51.556 --rc genhtml_function_coverage=1 00:13:51.556 --rc genhtml_legend=1 00:13:51.556 --rc geninfo_all_blocks=1 00:13:51.556 --rc geninfo_unexecuted_blocks=1 00:13:51.556 00:13:51.556 ' 00:13:51.556 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:51.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.557 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.125 17:24:26 
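The enumeration that follows matches each PCI function's vendor:device pair against these arrays and then reads /sys/bus/pci/devices/$pci/net/ to find the attached interface. A rough standalone equivalent for the E810 parts discovered below (vendor 0x8086, device 0x159b; sketch only, not the script's own code):

    for pci in /sys/bus/pci/devices/*; do
        # vendor and device are plain sysfs attribute files, e.g. 0x8086 / 0x159b
        if [ "$(cat "$pci/vendor")" = "0x8086" ] && [ "$(cat "$pci/device")" = "0x159b" ]; then
            ls "$pci/net" 2>/dev/null    # interface name(s), e.g. cvl_0_0
        fi
    done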
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:58.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:58.125 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.125 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:58.126 Found net devices under 0000:af:00.0: cvl_0_0 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:58.126 Found net devices under 0000:af:00.1: cvl_0_1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:58.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:13:58.126 00:13:58.126 --- 10.0.0.2 ping statistics --- 00:13:58.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.126 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:13:58.126 00:13:58.126 --- 10.0.0.1 ping statistics --- 00:13:58.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.126 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2529602 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2529602 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2529602 ']' 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:58.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 [2024-12-09 17:24:26.644639] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:13:58.126 [2024-12-09 17:24:26.644684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.126 [2024-12-09 17:24:26.723145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.126 [2024-12-09 17:24:26.761949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.126 [2024-12-09 17:24:26.761983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.126 [2024-12-09 17:24:26.761990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.126 [2024-12-09 17:24:26.761997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.126 [2024-12-09 17:24:26.762003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.126 [2024-12-09 17:24:26.762537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 [2024-12-09 17:24:26.897906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 [2024-12-09 17:24:26.918085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 NULL1 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.126 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.127 17:24:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:58.127 [2024-12-09 17:24:26.975516] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
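The rpc_cmd sequence above is a complete NVMe-oF/TCP target bring-up: transport, subsystem, listener, backing bdev, namespace. Issued directly with SPDK's scripts/rpc.py against the running nvmf_tgt it would look like this (sketch using the same arguments the test traced; the default /var/tmp/spdk.sock RPC socket is assumed):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # create the TCP transport (flags as traced above)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512                # 1000 MB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exposed as namespace 1 (the 1GB namespace the tool reports below)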
00:13:58.127 [2024-12-09 17:24:26.975547] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529623 ] 00:13:58.385 Attached to nqn.2016-06.io.spdk:cnode1 00:13:58.385 Namespace ID: 1 size: 1GB 00:13:58.385 fused_ordering(0) 00:13:58.385 fused_ordering(1) 00:13:58.385
[fused_ordering(2) through fused_ordering(527) omitted: the counter advances strictly in sequence, timestamps 00:13:58.385 through 00:13:58.905]
fused_ordering(528)
00:13:58.905 fused_ordering(529) 00:13:58.905 fused_ordering(530) 00:13:58.905 fused_ordering(531) 00:13:58.905 fused_ordering(532) 00:13:58.905 fused_ordering(533) 00:13:58.905 fused_ordering(534) 00:13:58.905 fused_ordering(535) 00:13:58.905 fused_ordering(536) 00:13:58.905 fused_ordering(537) 00:13:58.905 fused_ordering(538) 00:13:58.905 fused_ordering(539) 00:13:58.905 fused_ordering(540) 00:13:58.905 fused_ordering(541) 00:13:58.905 fused_ordering(542) 00:13:58.905 fused_ordering(543) 00:13:58.905 fused_ordering(544) 00:13:58.905 fused_ordering(545) 00:13:58.905 fused_ordering(546) 00:13:58.905 fused_ordering(547) 00:13:58.905 fused_ordering(548) 00:13:58.905 fused_ordering(549) 00:13:58.905 fused_ordering(550) 00:13:58.905 fused_ordering(551) 00:13:58.905 fused_ordering(552) 00:13:58.905 fused_ordering(553) 00:13:58.905 fused_ordering(554) 00:13:58.905 fused_ordering(555) 00:13:58.905 fused_ordering(556) 00:13:58.905 fused_ordering(557) 00:13:58.905 fused_ordering(558) 00:13:58.905 fused_ordering(559) 00:13:58.905 fused_ordering(560) 00:13:58.905 fused_ordering(561) 00:13:58.905 fused_ordering(562) 00:13:58.905 fused_ordering(563) 00:13:58.905 fused_ordering(564) 00:13:58.905 fused_ordering(565) 00:13:58.905 fused_ordering(566) 00:13:58.905 fused_ordering(567) 00:13:58.905 fused_ordering(568) 00:13:58.905 fused_ordering(569) 00:13:58.905 fused_ordering(570) 00:13:58.905 fused_ordering(571) 00:13:58.905 fused_ordering(572) 00:13:58.905 fused_ordering(573) 00:13:58.905 fused_ordering(574) 00:13:58.905 fused_ordering(575) 00:13:58.905 fused_ordering(576) 00:13:58.905 fused_ordering(577) 00:13:58.905 fused_ordering(578) 00:13:58.905 fused_ordering(579) 00:13:58.905 fused_ordering(580) 00:13:58.905 fused_ordering(581) 00:13:58.905 fused_ordering(582) 00:13:58.905 fused_ordering(583) 00:13:58.905 fused_ordering(584) 00:13:58.905 fused_ordering(585) 00:13:58.905 fused_ordering(586) 00:13:58.905 fused_ordering(587) 00:13:58.905 fused_ordering(588) 00:13:58.905 fused_ordering(589) 00:13:58.905 fused_ordering(590) 00:13:58.905 fused_ordering(591) 00:13:58.905 fused_ordering(592) 00:13:58.905 fused_ordering(593) 00:13:58.905 fused_ordering(594) 00:13:58.905 fused_ordering(595) 00:13:58.905 fused_ordering(596) 00:13:58.905 fused_ordering(597) 00:13:58.905 fused_ordering(598) 00:13:58.905 fused_ordering(599) 00:13:58.905 fused_ordering(600) 00:13:58.905 fused_ordering(601) 00:13:58.905 fused_ordering(602) 00:13:58.905 fused_ordering(603) 00:13:58.905 fused_ordering(604) 00:13:58.905 fused_ordering(605) 00:13:58.905 fused_ordering(606) 00:13:58.905 fused_ordering(607) 00:13:58.905 fused_ordering(608) 00:13:58.905 fused_ordering(609) 00:13:58.905 fused_ordering(610) 00:13:58.905 fused_ordering(611) 00:13:58.905 fused_ordering(612) 00:13:58.905 fused_ordering(613) 00:13:58.905 fused_ordering(614) 00:13:58.905 fused_ordering(615) 00:13:59.164 fused_ordering(616) 00:13:59.164 fused_ordering(617) 00:13:59.164 fused_ordering(618) 00:13:59.164 fused_ordering(619) 00:13:59.164 fused_ordering(620) 00:13:59.164 fused_ordering(621) 00:13:59.164 fused_ordering(622) 00:13:59.164 fused_ordering(623) 00:13:59.164 fused_ordering(624) 00:13:59.164 fused_ordering(625) 00:13:59.164 fused_ordering(626) 00:13:59.164 fused_ordering(627) 00:13:59.164 fused_ordering(628) 00:13:59.164 fused_ordering(629) 00:13:59.164 fused_ordering(630) 00:13:59.164 fused_ordering(631) 00:13:59.164 fused_ordering(632) 00:13:59.164 fused_ordering(633) 00:13:59.164 fused_ordering(634) 00:13:59.164 fused_ordering(635) 00:13:59.164 
fused_ordering(636) 00:13:59.164 fused_ordering(637) 00:13:59.164 fused_ordering(638) 00:13:59.164 fused_ordering(639) 00:13:59.164 fused_ordering(640) 00:13:59.164 fused_ordering(641) 00:13:59.164 fused_ordering(642) 00:13:59.164 fused_ordering(643) 00:13:59.164 fused_ordering(644) 00:13:59.164 fused_ordering(645) 00:13:59.164 fused_ordering(646) 00:13:59.164 fused_ordering(647) 00:13:59.164 fused_ordering(648) 00:13:59.164 fused_ordering(649) 00:13:59.164 fused_ordering(650) 00:13:59.164 fused_ordering(651) 00:13:59.164 fused_ordering(652) 00:13:59.164 fused_ordering(653) 00:13:59.164 fused_ordering(654) 00:13:59.164 fused_ordering(655) 00:13:59.164 fused_ordering(656) 00:13:59.164 fused_ordering(657) 00:13:59.164 fused_ordering(658) 00:13:59.164 fused_ordering(659) 00:13:59.164 fused_ordering(660) 00:13:59.164 fused_ordering(661) 00:13:59.164 fused_ordering(662) 00:13:59.164 fused_ordering(663) 00:13:59.164 fused_ordering(664) 00:13:59.164 fused_ordering(665) 00:13:59.164 fused_ordering(666) 00:13:59.164 fused_ordering(667) 00:13:59.164 fused_ordering(668) 00:13:59.164 fused_ordering(669) 00:13:59.164 fused_ordering(670) 00:13:59.164 fused_ordering(671) 00:13:59.164 fused_ordering(672) 00:13:59.164 fused_ordering(673) 00:13:59.164 fused_ordering(674) 00:13:59.164 fused_ordering(675) 00:13:59.164 fused_ordering(676) 00:13:59.164 fused_ordering(677) 00:13:59.164 fused_ordering(678) 00:13:59.164 fused_ordering(679) 00:13:59.164 fused_ordering(680) 00:13:59.164 fused_ordering(681) 00:13:59.164 fused_ordering(682) 00:13:59.164 fused_ordering(683) 00:13:59.164 fused_ordering(684) 00:13:59.164 fused_ordering(685) 00:13:59.164 fused_ordering(686) 00:13:59.164 fused_ordering(687) 00:13:59.164 fused_ordering(688) 00:13:59.164 fused_ordering(689) 00:13:59.164 fused_ordering(690) 00:13:59.164 fused_ordering(691) 00:13:59.164 fused_ordering(692) 00:13:59.164 fused_ordering(693) 00:13:59.164 fused_ordering(694) 00:13:59.164 fused_ordering(695) 00:13:59.164 fused_ordering(696) 00:13:59.164 fused_ordering(697) 00:13:59.164 fused_ordering(698) 00:13:59.164 fused_ordering(699) 00:13:59.164 fused_ordering(700) 00:13:59.164 fused_ordering(701) 00:13:59.164 fused_ordering(702) 00:13:59.164 fused_ordering(703) 00:13:59.164 fused_ordering(704) 00:13:59.164 fused_ordering(705) 00:13:59.164 fused_ordering(706) 00:13:59.164 fused_ordering(707) 00:13:59.164 fused_ordering(708) 00:13:59.164 fused_ordering(709) 00:13:59.164 fused_ordering(710) 00:13:59.164 fused_ordering(711) 00:13:59.164 fused_ordering(712) 00:13:59.164 fused_ordering(713) 00:13:59.164 fused_ordering(714) 00:13:59.164 fused_ordering(715) 00:13:59.164 fused_ordering(716) 00:13:59.164 fused_ordering(717) 00:13:59.164 fused_ordering(718) 00:13:59.164 fused_ordering(719) 00:13:59.164 fused_ordering(720) 00:13:59.164 fused_ordering(721) 00:13:59.164 fused_ordering(722) 00:13:59.164 fused_ordering(723) 00:13:59.164 fused_ordering(724) 00:13:59.164 fused_ordering(725) 00:13:59.164 fused_ordering(726) 00:13:59.164 fused_ordering(727) 00:13:59.164 fused_ordering(728) 00:13:59.164 fused_ordering(729) 00:13:59.164 fused_ordering(730) 00:13:59.164 fused_ordering(731) 00:13:59.164 fused_ordering(732) 00:13:59.164 fused_ordering(733) 00:13:59.164 fused_ordering(734) 00:13:59.164 fused_ordering(735) 00:13:59.164 fused_ordering(736) 00:13:59.164 fused_ordering(737) 00:13:59.164 fused_ordering(738) 00:13:59.164 fused_ordering(739) 00:13:59.164 fused_ordering(740) 00:13:59.164 fused_ordering(741) 00:13:59.164 fused_ordering(742) 00:13:59.164 fused_ordering(743) 
00:13:59.164 fused_ordering(744) 00:13:59.164 fused_ordering(745) 00:13:59.164 fused_ordering(746) 00:13:59.164 fused_ordering(747) 00:13:59.164 fused_ordering(748) 00:13:59.164 fused_ordering(749) 00:13:59.164 fused_ordering(750) 00:13:59.164 fused_ordering(751) 00:13:59.164 fused_ordering(752) 00:13:59.164 fused_ordering(753) 00:13:59.164 fused_ordering(754) 00:13:59.164 fused_ordering(755) 00:13:59.164 fused_ordering(756) 00:13:59.164 fused_ordering(757) 00:13:59.164 fused_ordering(758) 00:13:59.165 fused_ordering(759) 00:13:59.165 fused_ordering(760) 00:13:59.165 fused_ordering(761) 00:13:59.165 fused_ordering(762) 00:13:59.165 fused_ordering(763) 00:13:59.165 fused_ordering(764) 00:13:59.165 fused_ordering(765) 00:13:59.165 fused_ordering(766) 00:13:59.165 fused_ordering(767) 00:13:59.165 fused_ordering(768) 00:13:59.165 fused_ordering(769) 00:13:59.165 fused_ordering(770) 00:13:59.165 fused_ordering(771) 00:13:59.165 fused_ordering(772) 00:13:59.165 fused_ordering(773) 00:13:59.165 fused_ordering(774) 00:13:59.165 fused_ordering(775) 00:13:59.165 fused_ordering(776) 00:13:59.165 fused_ordering(777) 00:13:59.165 fused_ordering(778) 00:13:59.165 fused_ordering(779) 00:13:59.165 fused_ordering(780) 00:13:59.165 fused_ordering(781) 00:13:59.165 fused_ordering(782) 00:13:59.165 fused_ordering(783) 00:13:59.165 fused_ordering(784) 00:13:59.165 fused_ordering(785) 00:13:59.165 fused_ordering(786) 00:13:59.165 fused_ordering(787) 00:13:59.165 fused_ordering(788) 00:13:59.165 fused_ordering(789) 00:13:59.165 fused_ordering(790) 00:13:59.165 fused_ordering(791) 00:13:59.165 fused_ordering(792) 00:13:59.165 fused_ordering(793) 00:13:59.165 fused_ordering(794) 00:13:59.165 fused_ordering(795) 00:13:59.165 fused_ordering(796) 00:13:59.165 fused_ordering(797) 00:13:59.165 fused_ordering(798) 00:13:59.165 fused_ordering(799) 00:13:59.165 fused_ordering(800) 00:13:59.165 fused_ordering(801) 00:13:59.165 fused_ordering(802) 00:13:59.165 fused_ordering(803) 00:13:59.165 fused_ordering(804) 00:13:59.165 fused_ordering(805) 00:13:59.165 fused_ordering(806) 00:13:59.165 fused_ordering(807) 00:13:59.165 fused_ordering(808) 00:13:59.165 fused_ordering(809) 00:13:59.165 fused_ordering(810) 00:13:59.165 fused_ordering(811) 00:13:59.165 fused_ordering(812) 00:13:59.165 fused_ordering(813) 00:13:59.165 fused_ordering(814) 00:13:59.165 fused_ordering(815) 00:13:59.165 fused_ordering(816) 00:13:59.165 fused_ordering(817) 00:13:59.165 fused_ordering(818) 00:13:59.165 fused_ordering(819) 00:13:59.165 fused_ordering(820) 00:13:59.732 fused_ordering(821) 00:13:59.732 fused_ordering(822) 00:13:59.732 fused_ordering(823) 00:13:59.732 fused_ordering(824) 00:13:59.732 fused_ordering(825) 00:13:59.732 fused_ordering(826) 00:13:59.732 fused_ordering(827) 00:13:59.732 fused_ordering(828) 00:13:59.732 fused_ordering(829) 00:13:59.732 fused_ordering(830) 00:13:59.732 fused_ordering(831) 00:13:59.732 fused_ordering(832) 00:13:59.732 fused_ordering(833) 00:13:59.732 fused_ordering(834) 00:13:59.732 fused_ordering(835) 00:13:59.732 fused_ordering(836) 00:13:59.732 fused_ordering(837) 00:13:59.732 fused_ordering(838) 00:13:59.732 fused_ordering(839) 00:13:59.732 fused_ordering(840) 00:13:59.732 fused_ordering(841) 00:13:59.732 fused_ordering(842) 00:13:59.732 fused_ordering(843) 00:13:59.732 fused_ordering(844) 00:13:59.732 fused_ordering(845) 00:13:59.732 fused_ordering(846) 00:13:59.732 fused_ordering(847) 00:13:59.732 fused_ordering(848) 00:13:59.732 fused_ordering(849) 00:13:59.732 fused_ordering(850) 00:13:59.732 
fused_ordering(851) 00:13:59.732 fused_ordering(852) 00:13:59.732 fused_ordering(853) 00:13:59.732 fused_ordering(854) 00:13:59.732 fused_ordering(855) 00:13:59.732 fused_ordering(856) 00:13:59.732 fused_ordering(857) 00:13:59.732 fused_ordering(858) 00:13:59.732 fused_ordering(859) 00:13:59.732 fused_ordering(860) 00:13:59.733 fused_ordering(861) 00:13:59.733 fused_ordering(862) 00:13:59.733 fused_ordering(863) 00:13:59.733 fused_ordering(864) 00:13:59.733 fused_ordering(865) 00:13:59.733 fused_ordering(866) 00:13:59.733 fused_ordering(867) 00:13:59.733 fused_ordering(868) 00:13:59.733 fused_ordering(869) 00:13:59.733 fused_ordering(870) 00:13:59.733 fused_ordering(871) 00:13:59.733 fused_ordering(872) 00:13:59.733 fused_ordering(873) 00:13:59.733 fused_ordering(874) 00:13:59.733 fused_ordering(875) 00:13:59.733 fused_ordering(876) 00:13:59.733 fused_ordering(877) 00:13:59.733 fused_ordering(878) 00:13:59.733 fused_ordering(879) 00:13:59.733 fused_ordering(880) 00:13:59.733 fused_ordering(881) 00:13:59.733 fused_ordering(882) 00:13:59.733 fused_ordering(883) 00:13:59.733 fused_ordering(884) 00:13:59.733 fused_ordering(885) 00:13:59.733 fused_ordering(886) 00:13:59.733 fused_ordering(887) 00:13:59.733 fused_ordering(888) 00:13:59.733 fused_ordering(889) 00:13:59.733 fused_ordering(890) 00:13:59.733 fused_ordering(891) 00:13:59.733 fused_ordering(892) 00:13:59.733 fused_ordering(893) 00:13:59.733 fused_ordering(894) 00:13:59.733 fused_ordering(895) 00:13:59.733 fused_ordering(896) 00:13:59.733 fused_ordering(897) 00:13:59.733 fused_ordering(898) 00:13:59.733 fused_ordering(899) 00:13:59.733 fused_ordering(900) 00:13:59.733 fused_ordering(901) 00:13:59.733 fused_ordering(902) 00:13:59.733 fused_ordering(903) 00:13:59.733 fused_ordering(904) 00:13:59.733 fused_ordering(905) 00:13:59.733 fused_ordering(906) 00:13:59.733 fused_ordering(907) 00:13:59.733 fused_ordering(908) 00:13:59.733 fused_ordering(909) 00:13:59.733 fused_ordering(910) 00:13:59.733 fused_ordering(911) 00:13:59.733 fused_ordering(912) 00:13:59.733 fused_ordering(913) 00:13:59.733 fused_ordering(914) 00:13:59.733 fused_ordering(915) 00:13:59.733 fused_ordering(916) 00:13:59.733 fused_ordering(917) 00:13:59.733 fused_ordering(918) 00:13:59.733 fused_ordering(919) 00:13:59.733 fused_ordering(920) 00:13:59.733 fused_ordering(921) 00:13:59.733 fused_ordering(922) 00:13:59.733 fused_ordering(923) 00:13:59.733 fused_ordering(924) 00:13:59.733 fused_ordering(925) 00:13:59.733 fused_ordering(926) 00:13:59.733 fused_ordering(927) 00:13:59.733 fused_ordering(928) 00:13:59.733 fused_ordering(929) 00:13:59.733 fused_ordering(930) 00:13:59.733 fused_ordering(931) 00:13:59.733 fused_ordering(932) 00:13:59.733 fused_ordering(933) 00:13:59.733 fused_ordering(934) 00:13:59.733 fused_ordering(935) 00:13:59.733 fused_ordering(936) 00:13:59.733 fused_ordering(937) 00:13:59.733 fused_ordering(938) 00:13:59.733 fused_ordering(939) 00:13:59.733 fused_ordering(940) 00:13:59.733 fused_ordering(941) 00:13:59.733 fused_ordering(942) 00:13:59.733 fused_ordering(943) 00:13:59.733 fused_ordering(944) 00:13:59.733 fused_ordering(945) 00:13:59.733 fused_ordering(946) 00:13:59.733 fused_ordering(947) 00:13:59.733 fused_ordering(948) 00:13:59.733 fused_ordering(949) 00:13:59.733 fused_ordering(950) 00:13:59.733 fused_ordering(951) 00:13:59.733 fused_ordering(952) 00:13:59.733 fused_ordering(953) 00:13:59.733 fused_ordering(954) 00:13:59.733 fused_ordering(955) 00:13:59.733 fused_ordering(956) 00:13:59.733 fused_ordering(957) 00:13:59.733 fused_ordering(958) 
00:13:59.733 fused_ordering(959) 00:13:59.733 fused_ordering(960) 00:13:59.733 fused_ordering(961) 00:13:59.733 fused_ordering(962) 00:13:59.733 fused_ordering(963) 00:13:59.733 fused_ordering(964) 00:13:59.733 fused_ordering(965) 00:13:59.733 fused_ordering(966) 00:13:59.733 fused_ordering(967) 00:13:59.733 fused_ordering(968) 00:13:59.733 fused_ordering(969) 00:13:59.733 fused_ordering(970) 00:13:59.733 fused_ordering(971) 00:13:59.733 fused_ordering(972) 00:13:59.733 fused_ordering(973) 00:13:59.733 fused_ordering(974) 00:13:59.733 fused_ordering(975) 00:13:59.733 fused_ordering(976) 00:13:59.733 fused_ordering(977) 00:13:59.733 fused_ordering(978) 00:13:59.733 fused_ordering(979) 00:13:59.733 fused_ordering(980) 00:13:59.733 fused_ordering(981) 00:13:59.733 fused_ordering(982) 00:13:59.733 fused_ordering(983) 00:13:59.733 fused_ordering(984) 00:13:59.733 fused_ordering(985) 00:13:59.733 fused_ordering(986) 00:13:59.733 fused_ordering(987) 00:13:59.733 fused_ordering(988) 00:13:59.733 fused_ordering(989) 00:13:59.733 fused_ordering(990) 00:13:59.733 fused_ordering(991) 00:13:59.733 fused_ordering(992) 00:13:59.733 fused_ordering(993) 00:13:59.733 fused_ordering(994) 00:13:59.733 fused_ordering(995) 00:13:59.733 fused_ordering(996) 00:13:59.733 fused_ordering(997) 00:13:59.733 fused_ordering(998) 00:13:59.733 fused_ordering(999) 00:13:59.733 fused_ordering(1000) 00:13:59.733 fused_ordering(1001) 00:13:59.733 fused_ordering(1002) 00:13:59.733 fused_ordering(1003) 00:13:59.733 fused_ordering(1004) 00:13:59.733 fused_ordering(1005) 00:13:59.733 fused_ordering(1006) 00:13:59.733 fused_ordering(1007) 00:13:59.733 fused_ordering(1008) 00:13:59.733 fused_ordering(1009) 00:13:59.733 fused_ordering(1010) 00:13:59.733 fused_ordering(1011) 00:13:59.733 fused_ordering(1012) 00:13:59.733 fused_ordering(1013) 00:13:59.733 fused_ordering(1014) 00:13:59.733 fused_ordering(1015) 00:13:59.733 fused_ordering(1016) 00:13:59.733 fused_ordering(1017) 00:13:59.733 fused_ordering(1018) 00:13:59.733 fused_ordering(1019) 00:13:59.733 fused_ordering(1020) 00:13:59.733 fused_ordering(1021) 00:13:59.733 fused_ordering(1022) 00:13:59.733 fused_ordering(1023) 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:59.733 rmmod nvme_tcp 00:13:59.733 rmmod nvme_fabrics 00:13:59.733 rmmod nvme_keyring 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:59.733 17:24:28 
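For reference, the nvmfcleanup sequence traced above is a retry loop around kernel-module removal: unloading nvme-tcp can fail transiently while connections drain, so the helper disables errexit and retries. A condensed sketch of the pattern (the script line numbers and the {1..20} bound come from the trace; the break-on-success and the back-off are assumptions about the elided loop body):

  # Sketch of the nvmfcleanup pattern (nvmf/common.sh@121-129 in the trace above).
  nvmfcleanup_sketch() {
      sync
      set +e                                # tolerate EBUSY while connections drain
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # assumed exit condition
          sleep 1                           # assumed back-off between attempts
      done
      set -e
      return 0
  }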
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2529602 ']'
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2529602
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2529602 ']'
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2529602
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:59.733 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2529602
00:13:59.993 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:59.993 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:59.993 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2529602'
killing process with pid 2529602
00:13:59.993 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2529602
00:13:59.993 17:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2529602
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:59.993 17:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:02.528
00:14:02.528 real 0m10.660s
00:14:02.528 user 0m5.033s
00:14:02.528 sys 0m5.812s
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:02.528 ************************************
00:14:02.528 END TEST nvmf_fused_ordering
************************************
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:02.528 ************************************
00:14:02.528 START TEST nvmf_ns_masking
00:14:02.528 ************************************
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:02.528 * Looking for test storage...
00:14:02.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.528 --rc genhtml_branch_coverage=1
00:14:02.528 --rc genhtml_function_coverage=1
00:14:02.528 --rc genhtml_legend=1
00:14:02.528 --rc geninfo_all_blocks=1
00:14:02.528 --rc geninfo_unexecuted_blocks=1
00:14:02.528
00:14:02.528 '
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.528 --rc genhtml_branch_coverage=1
00:14:02.528 --rc genhtml_function_coverage=1
00:14:02.528 --rc genhtml_legend=1
00:14:02.528 --rc geninfo_all_blocks=1
00:14:02.528 --rc geninfo_unexecuted_blocks=1
00:14:02.528
00:14:02.528 '
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:14:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.528 --rc genhtml_branch_coverage=1
00:14:02.528 --rc genhtml_function_coverage=1
00:14:02.528 --rc genhtml_legend=1
00:14:02.528 --rc geninfo_all_blocks=1
00:14:02.528 --rc geninfo_unexecuted_blocks=1
00:14:02.528
00:14:02.528 '
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:14:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.528 --rc genhtml_branch_coverage=1
00:14:02.528 --rc genhtml_function_coverage=1
00:14:02.528 --rc genhtml_legend=1
00:14:02.528 --rc geninfo_all_blocks=1
00:14:02.528 --rc geninfo_unexecuted_blocks=1
00:14:02.528
00:14:02.528 '
00:14:02.528 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
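The lt/cmp_versions exchange above is how the suite decides whether the installed lcov (1.15) predates version 2 before exporting the LCOV options. A minimal re-creation of the comparison (simplified to the less-than case; the real scripts/common.sh helper also handles the other operators and mixed-length versions):

  # Split versions on '.', '-' and ':' and compare field by field, as traced above.
  lt_sketch() {    # usage: lt_sketch 1.15 2 -> returns 0 (true) if $1 < $2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal is not less-than
  }
  lt_sketch 1.15 2 && echo 'lcov predates 2: use the --rc lcov_* option spellings'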
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=... [PATH exports condensed: paths/export.sh@2-@4 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the standard system paths; the long, heavily duplicated values are elided]
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo ... [expanded PATH value elided, identical to the value exported above]
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
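Two conventions set up above carry through the rest of the test: the initiator identity comes from nvme gen-hostnqn, whose uuid: suffix doubles as the host ID, and every target-side operation goes through the rpc_py wrapper just defined. A hedged sketch (deriving the ID by stripping the NQN prefix is an assumption; the trace only shows that the two traced values share the same UUID):

  # Host identity, consistent with the values traced above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed derivation: keep the trailing <uuid>
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Any JSON-RPC can now be issued against the target's default /var/tmp/spdk.sock:
  $rpc_py rpc_get_methods > /dev/null    # cheap liveness probe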
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f0b9dcf0-b019-4b52-82f4-cda414a00d65
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=24c2bb2b-02f2-49d7-9ef8-bb277202bc02
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dd320cf6-0bd2-4391-90c8-9c5132a155f2
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.097 17:24:37
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:09.097 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:09.097 17:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:09.097 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:09.097 Found net devices under 0000:af:00.0: cvl_0_0 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.097 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
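The loop traced around this point resolves each whitelisted PCI function to its kernel netdev through sysfs and keeps only interfaces that are up, which is how cvl_0_0 and cvl_0_1 end up in net_devs. A condensed sketch of that resolution step (the device-ID whitelist is omitted, and the operstate probe is an assumption; the trace only shows the resulting '[[ up == up ]]' comparison):

  # pci -> netdev resolution, as traced at nvmf/common.sh@410-429 above.
  pci_devs=(0000:af:00.0 0000:af:00.1)                    # the two e810 functions found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the ifname
      for net_dev in "${pci_net_devs[@]}"; do
          [[ $(cat "/sys/class/net/$net_dev/operstate" 2> /dev/null) == up ]] || continue
          echo "Found net devices under $pci: $net_dev"
          net_devs+=("$net_dev")
      done
  done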
00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:09.098 Found net devices under 0000:af:00.1: cvl_0_1 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.098 17:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:14:09.098 00:14:09.098 --- 10.0.0.2 ping statistics --- 00:14:09.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.098 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:14:09.098 00:14:09.098 --- 10.0.0.1 ping statistics --- 00:14:09.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.098 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2533566 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2533566 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2533566 ']' 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.098 [2024-12-09 17:24:37.423689] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:14:09.098 [2024-12-09 17:24:37.423732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.098 [2024-12-09 17:24:37.501321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.098 [2024-12-09 17:24:37.539881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.098 [2024-12-09 17:24:37.539915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.098 [2024-12-09 17:24:37.539922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.098 [2024-12-09 17:24:37.539928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.098 [2024-12-09 17:24:37.539933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
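At this point the harness polls the freshly launched target until its JSON-RPC socket answers. A minimal paraphrase of that wait (the real waitforlisten in autotest_common.sh adds retry limits and more PID-liveness bookkeeping, so treat this as a sketch):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1   # give up if the target process died
        sleep 0.1
    done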
00:14:09.098 [2024-12-09 17:24:37.540448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:09.098 [2024-12-09 17:24:37.835834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:09.098 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:09.098 Malloc1 00:14:09.098 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:09.098 Malloc2 00:14:09.357 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:09.357 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:09.616 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.874 [2024-12-09 17:24:38.878038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.874 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:09.874 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd320cf6-0bd2-4391-90c8-9c5132a155f2 -a 10.0.0.2 -s 4420 -i 4 00:14:10.132 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.132 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:10.132 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.132 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:10.132 
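Collected from the trace above, the entire target-side setup is six RPCs plus one nvme-cli call (paths shortened to rpc.py; every command appears verbatim in the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            -I dd320cf6-0bd2-4391-90c8-9c5132a155f2 -a 10.0.0.2 -s 4420 -i 4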
17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.035 [ 0]:0x1 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.035 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49971685740b4a8ca4543a10753485af 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49971685740b4a8ca4543a10753485af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.294 [ 0]:0x1 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.294 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49971685740b4a8ca4543a10753485af 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49971685740b4a8ca4543a10753485af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.553 17:24:41 
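ns_is_visible, reconstructed from the @43-@45 steps traced above (a sketch of the helper in target/ns_masking.sh, which may differ in detail): a namespace counts as visible when nvme list-ns reports its nsid and nvme id-ns returns a non-zero NGUID; a masked namespace answers id-ns with an all-zero NGUID.

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"    # prints e.g. "[ 0]:0x1" when present
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }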
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.553 [ 1]:0x2 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.553 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.811 17:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd320cf6-0bd2-4391-90c8-9c5132a155f2 -a 10.0.0.2 -s 4420 -i 4 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:13.070 17:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:15.603 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.604 [ 0]:0x2 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.604 [ 0]:0x1 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49971685740b4a8ca4543a10753485af 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49971685740b4a8ca4543a10753485af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.604 [ 1]:0x2 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.604 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.863 17:24:44 
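The masking contract exercised here: a namespace attached with --no-auto-visible is hidden from every host until its NQN is allow-listed, and the allow-list is edited live over RPC. All three commands below are verbatim from the trace (@80, @88, @93):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # nsid 1 visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again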
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.863 [ 0]:0x2 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.863 17:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:16.124 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:16.124 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd320cf6-0bd2-4391-90c8-9c5132a155f2 -a 10.0.0.2 -s 4420 -i 4 00:14:16.384 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:16.384 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.384 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.384 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:16.384 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:16.384 17:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.284 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.543 [ 0]:0x1 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49971685740b4a8ca4543a10753485af 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49971685740b4a8ca4543a10753485af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.543 [ 1]:0x2 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.543 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.802 [ 0]:0x2 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.802 17:24:47 
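Worth noting in the ordering above: the controller connected at 17:24:45 stays connected while the allow-list changes, and the next list-ns on the same /dev/nvme0 already reflects the edit, so masking takes effect on the live connection with no disconnect in between. A condensed replay of the @106-@108 checks:

    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0 | grep 0x1    # no match anymore: nsid 1 left the live view
    nvme list-ns /dev/nvme0 | grep 0x2    # nsid 2 (attached without masking) remains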
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:18.802 17:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:19.061 [2024-12-09 17:24:48.048050] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:19.061 request: 00:14:19.061 { 00:14:19.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.061 "nsid": 2, 00:14:19.061 "host": "nqn.2016-06.io.spdk:host1", 00:14:19.061 "method": "nvmf_ns_remove_host", 00:14:19.061 "req_id": 1 00:14:19.061 } 00:14:19.061 Got JSON-RPC error response 00:14:19.061 response: 00:14:19.061 { 00:14:19.061 "code": -32602, 00:14:19.061 "message": "Invalid parameters" 00:14:19.061 } 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:19.061 17:24:48 
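The failure just traced is deliberate: nsid 2 was attached auto-visible, so it carries no allow-list to edit, and nvmf_ns_remove_host comes back with JSON-RPC error -32602. The NOT helper wraps commands that are expected to fail; a minimal paraphrase of its effect (the real helper in autotest_common.sh also validates its argument and tracks the error status, as the @640-@679 steps show):

    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test failure
        fi
        return 0        # command failed as expected -> test passes
    }
    NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1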
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.061 [ 0]:0x2 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0e6aa38cddb4306bb423e970294b80b 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0e6aa38cddb4306bb423e970294b80b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2535333 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2535333 
/var/tmp/host.sock 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2535333 ']' 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:19.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.061 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:19.320 [2024-12-09 17:24:48.284507] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:14:19.320 [2024-12-09 17:24:48.284555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535333 ] 00:14:19.320 [2024-12-09 17:24:48.357019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.320 [2024-12-09 17:24:48.396075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.579 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.579 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:19.579 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.837 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.837 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f0b9dcf0-b019-4b52-82f4-cda414a00d65 00:14:19.837 17:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:19.837 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F0B9DCF0B0194B5282F4CDA414A00D65 -i 00:14:20.095 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 24c2bb2b-02f2-49d7-9ef8-bb277202bc02 00:14:20.095 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:20.095 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 24C2BB2B02F249D79EF8BB277202BC02 -i 00:14:20.354 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.613 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:20.872 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:20.872 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:21.130 nvme0n1 00:14:21.130 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:21.130 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:21.389 nvme1n2 00:14:21.389 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:21.389 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:21.389 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:21.389 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:21.389 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:21.647 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:21.647 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:21.647 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:21.647 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:21.906 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f0b9dcf0-b019-4b52-82f4-cda414a00d65 == \f\0\b\9\d\c\f\0\-\b\0\1\9\-\4\b\5\2\-\8\2\f\4\-\c\d\a\4\1\4\a\0\0\d\6\5 ]] 00:14:21.906 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:21.906 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:21.906 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:21.906 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
24c2bb2b-02f2-49d7-9ef8-bb277202bc02 == \2\4\c\2\b\b\2\b\-\0\2\f\2\-\4\9\d\7\-\9\e\f\8\-\b\b\2\7\7\2\0\2\b\c\0\2 ]] 00:14:21.906 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.165 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f0b9dcf0-b019-4b52-82f4-cda414a00d65 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F0B9DCF0B0194B5282F4CDA414A00D65 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F0B9DCF0B0194B5282F4CDA414A00D65 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:22.423 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F0B9DCF0B0194B5282F4CDA414A00D65 00:14:22.423 [2024-12-09 17:24:51.585946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:22.423 [2024-12-09 17:24:51.585975] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:22.423 [2024-12-09 17:24:51.585983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:22.423 request: 00:14:22.423 { 00:14:22.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.423 "namespace": { 00:14:22.423 "bdev_name": 
"invalid", 00:14:22.423 "nsid": 1, 00:14:22.423 "nguid": "F0B9DCF0B0194B5282F4CDA414A00D65", 00:14:22.423 "no_auto_visible": false, 00:14:22.423 "hide_metadata": false 00:14:22.423 }, 00:14:22.424 "method": "nvmf_subsystem_add_ns", 00:14:22.424 "req_id": 1 00:14:22.424 } 00:14:22.424 Got JSON-RPC error response 00:14:22.424 response: 00:14:22.424 { 00:14:22.424 "code": -32602, 00:14:22.424 "message": "Invalid parameters" 00:14:22.424 } 00:14:22.424 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.424 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.682 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.682 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.682 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f0b9dcf0-b019-4b52-82f4-cda414a00d65 00:14:22.682 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:22.682 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F0B9DCF0B0194B5282F4CDA414A00D65 -i 00:14:22.682 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:25.216 17:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:25.216 17:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:25.216 17:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2535333 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2535333 ']' 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2535333 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535333 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535333' 00:14:25.216 killing process with pid 2535333 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2535333 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2535333 00:14:25.216 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:25.474 rmmod nvme_tcp 00:14:25.474 rmmod nvme_fabrics 00:14:25.474 rmmod nvme_keyring 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2533566 ']' 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2533566 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2533566 ']' 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2533566 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.474 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2533566 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2533566' 00:14:25.733 killing process with pid 2533566 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2533566 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2533566 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.733 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:28.268 00:14:28.268 real 0m25.737s 00:14:28.268 user 0m30.821s 00:14:28.268 sys 0m6.999s 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.268 ************************************ 00:14:28.268 END TEST nvmf_ns_masking 00:14:28.268 ************************************ 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.268 17:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.268 ************************************ 00:14:28.268 START TEST nvmf_nvme_cli 00:14:28.268 ************************************ 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:28.268 * Looking for test storage... 
00:14:28.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.268 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:28.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.269 --rc genhtml_branch_coverage=1 00:14:28.269 --rc genhtml_function_coverage=1 00:14:28.269 --rc genhtml_legend=1 00:14:28.269 --rc geninfo_all_blocks=1 00:14:28.269 --rc geninfo_unexecuted_blocks=1 00:14:28.269 00:14:28.269 ' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:28.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.269 --rc genhtml_branch_coverage=1 00:14:28.269 --rc genhtml_function_coverage=1 00:14:28.269 --rc genhtml_legend=1 00:14:28.269 --rc geninfo_all_blocks=1 00:14:28.269 --rc geninfo_unexecuted_blocks=1 00:14:28.269 00:14:28.269 ' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:28.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.269 --rc genhtml_branch_coverage=1 00:14:28.269 --rc genhtml_function_coverage=1 00:14:28.269 --rc genhtml_legend=1 00:14:28.269 --rc geninfo_all_blocks=1 00:14:28.269 --rc geninfo_unexecuted_blocks=1 00:14:28.269 00:14:28.269 ' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:28.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.269 --rc genhtml_branch_coverage=1 00:14:28.269 --rc genhtml_function_coverage=1 00:14:28.269 --rc genhtml_legend=1 00:14:28.269 --rc geninfo_all_blocks=1 00:14:28.269 --rc geninfo_unexecuted_blocks=1 00:14:28.269 00:14:28.269 ' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
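The "lt 1.15 2" trace above is scripts/common.sh checking the installed lcov against version 2: both version strings are split on dots, dashes, and colons into arrays and compared field by field, and since 1.15 predates 2.x the harness keeps the pre-2.0 "--rc lcov_*" option names it exports just afterwards. A minimal sketch of that comparison (numeric fields assumed; the traced cmp_versions additionally takes the operator, here '<', as an argument):

    # Field-wise dotted-version comparison, same splitting idea as the
    # cmp_versions trace above.
    version_lt() {
        local IFS=.-:                 # split fields on dots, dashes, colons
        local -a v1=($1)
        local -a v2=($2)
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                      # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov options"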
00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.269 17:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:28.269 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.676 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:33.936 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:33.936 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.936 
17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:33.936 Found net devices under 0000:af:00.0: cvl_0_0 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:33.936 Found net devices under 0000:af:00.1: cvl_0_1 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:33.936 17:25:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:34.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:14:34.195 00:14:34.195 --- 10.0.0.2 ping statistics --- 00:14:34.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.195 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:34.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:14:34.195 00:14:34.195 --- 10.0.0.1 ping statistics --- 00:14:34.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.195 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:34.195 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2539997 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2539997 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2539997 ']' 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.196 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.196 [2024-12-09 17:25:03.310859] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
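Everything from "ip netns add" through the two pings is nvmf_tcp_init splitting the dual-port E810 NIC across a network namespace: the preceding records located the two ports (0000:af:00.0/1, device 0x159b) and their net devices cvl_0_0 and cvl_0_1 by scanning the PCI bus, after which the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace, the initiator-side port cvl_0_1 stays in the root namespace, each end gets an address on 10.0.0.0/24, and an iptables rule opens port 4420 on the initiator interface. With NET_TYPE=phy this sends initiator-to-target TCP traffic over the physical link between the two ports rather than loopback. The target binary is then launched inside the namespace via NVMF_TARGET_NS_CMD and waited on with waitforlisten. A condensed sketch using the device and address values from this run (the target is backgrounded here for brevity):

    # Split the two NIC ports across a namespace, verify connectivity,
    # then start the NVMe-oF target inside that namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &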
00:14:34.196 [2024-12-09 17:25:03.310903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.455 [2024-12-09 17:25:03.388052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.455 [2024-12-09 17:25:03.427662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.455 [2024-12-09 17:25:03.427701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.455 [2024-12-09 17:25:03.427708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.455 [2024-12-09 17:25:03.427714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.455 [2024-12-09 17:25:03.427719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.455 [2024-12-09 17:25:03.429179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.455 [2024-12-09 17:25:03.429292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.455 [2024-12-09 17:25:03.429326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.455 [2024-12-09 17:25:03.429328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.455 [2024-12-09 17:25:03.579103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.455 Malloc0 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
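Once the four reactors are up, the test provisions the target over its RPC socket (/var/tmp/spdk.sock); rpc_cmd in these traces is the harness wrapper around scripts/rpc.py. Issued directly, the same sequence would look like the sketch below; the subsystem, namespace, and listener calls continue in the records that follow:

    # Provision the freshly started target (rpc.py path relative to the
    # SPDK tree; it talks to /var/tmp/spdk.sock by default).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB bdev, 512 B blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420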
00:14:34.455 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.714 Malloc1 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.714 [2024-12-09 17:25:03.672939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:34.714 00:14:34.714 Discovery Log Number of Records 2, Generation counter 2 00:14:34.714 =====Discovery Log Entry 0====== 00:14:34.714 trtype: tcp 00:14:34.714 adrfam: ipv4 00:14:34.714 subtype: current discovery subsystem 00:14:34.714 treq: not required 00:14:34.714 portid: 0 00:14:34.714 trsvcid: 4420 00:14:34.714 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:34.714 traddr: 10.0.0.2 00:14:34.714 eflags: explicit discovery connections, duplicate discovery information 00:14:34.714 sectype: none 00:14:34.714 =====Discovery Log Entry 1====== 00:14:34.714 trtype: tcp 00:14:34.714 adrfam: ipv4 00:14:34.714 subtype: nvme subsystem 00:14:34.714 treq: not required 00:14:34.714 portid: 0 00:14:34.714 trsvcid: 4420 00:14:34.714 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:34.714 traddr: 10.0.0.2 00:14:34.714 eflags: none 00:14:34.714 sectype: none 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:14:34.714 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.089 17:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:36.089 17:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:36.089 17:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.089 17:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:36.089 17:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:36.089 17:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:37.990 /dev/nvme0n2 00:14:37.990 /dev/nvme1n1 00:14:37.990 /dev/nvme1n2 ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == 
/dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:37.990 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.249 17:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.249 rmmod nvme_tcp 00:14:38.249 rmmod nvme_fabrics 00:14:38.249 rmmod nvme_keyring 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2539997 ']' 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2539997 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2539997 ']' 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2539997 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539997 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539997' 00:14:38.249 killing process with pid 2539997 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2539997 00:14:38.249 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2539997 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.508 17:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.042 00:14:41.042 real 0m12.623s 00:14:41.042 user 0m18.265s 00:14:41.042 sys 0m5.032s 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.042 ************************************ 00:14:41.042 END TEST nvmf_nvme_cli 00:14:41.042 ************************************ 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.042 ************************************ 00:14:41.042 START TEST nvmf_vfio_user 00:14:41.042 ************************************ 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:41.042 * Looking for test storage... 
00:14:41.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:41.042 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.043 --rc genhtml_branch_coverage=1 00:14:41.043 --rc genhtml_function_coverage=1 00:14:41.043 --rc genhtml_legend=1 00:14:41.043 --rc geninfo_all_blocks=1 00:14:41.043 --rc geninfo_unexecuted_blocks=1 00:14:41.043 00:14:41.043 ' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.043 --rc genhtml_branch_coverage=1 00:14:41.043 --rc genhtml_function_coverage=1 00:14:41.043 --rc genhtml_legend=1 00:14:41.043 --rc geninfo_all_blocks=1 00:14:41.043 --rc geninfo_unexecuted_blocks=1 00:14:41.043 00:14:41.043 ' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.043 --rc genhtml_branch_coverage=1 00:14:41.043 --rc genhtml_function_coverage=1 00:14:41.043 --rc genhtml_legend=1 00:14:41.043 --rc geninfo_all_blocks=1 00:14:41.043 --rc geninfo_unexecuted_blocks=1 00:14:41.043 00:14:41.043 ' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.043 --rc genhtml_branch_coverage=1 00:14:41.043 --rc genhtml_function_coverage=1 00:14:41.043 --rc genhtml_legend=1 00:14:41.043 --rc geninfo_all_blocks=1 00:14:41.043 --rc geninfo_unexecuted_blocks=1 00:14:41.043 00:14:41.043 ' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
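The scripts/common.sh xtrace above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) is the lcov version gate: both version strings are split on '.', '-' and ':' and compared numerically field by field, and because 1.15 sorts before 2 the run goes on to export LCOV_OPTS with the extra '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' flags. A minimal standalone sketch of that comparison (condensed; the ':-0' defaulting here stands in for the script's longer decimal validation, and the helper body is illustrative rather than the verbatim script):

  # Sketch of the version test traced above: split on '.', '-' and ':',
  # then compare numerically, field by field, until one side wins.
  cmp_versions() {
      local ver1 ver2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
      done
      return 1   # versions are equal: neither '<' nor '>' holds
  }
  cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x: enable branch/function coverage flags"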
00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2541262 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2541262' 00:14:41.043 Process pid: 2541262 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2541262 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2541262 ']' 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.043 17:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:41.044 [2024-12-09 17:25:10.000959] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:14:41.044 [2024-12-09 17:25:10.001008] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.044 [2024-12-09 17:25:10.081845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.044 [2024-12-09 17:25:10.126438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.044 [2024-12-09 17:25:10.126474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:41.044 [2024-12-09 17:25:10.126481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.044 [2024-12-09 17:25:10.126487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.044 [2024-12-09 17:25:10.126492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.044 [2024-12-09 17:25:10.127957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.044 [2024-12-09 17:25:10.127981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.044 [2024-12-09 17:25:10.128070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.044 [2024-12-09 17:25:10.128071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.301 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.301 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:41.301 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:42.237 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:42.496 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:42.496 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:42.496 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:42.496 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:42.496 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:42.496 Malloc1 00:14:42.496 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:42.755 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:43.014 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:43.273 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:43.273 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:43.273 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:43.273 Malloc2 00:14:43.273 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
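Stripped of xtrace noise, the target-side setup being replayed here (the add-namespace and add-listener calls for cnode2 follow just below) is one VFIOUSER transport plus a per-device loop. A condensed sketch, with the rpc.py calls exactly as they appear in the trace; the loop itself is shorthand for the script's seq 1 $NUM_DEVICES iteration:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # One vfio-user transport for the target, then a malloc-backed subsystem
  # per device, each listening on its own vfio-user socket directory.
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"    # MALLOC_BDEV_SIZE=64 (MB), MALLOC_BLOCK_SIZE=512
      $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done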
00:14:43.532 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:43.791 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:44.052 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:44.052 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:44.052 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.052 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:44.052 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:44.052 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:44.052 [2024-12-09 17:25:13.053895] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:14:44.052 [2024-12-09 17:25:13.053922] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541741 ] 00:14:44.052 [2024-12-09 17:25:13.092680] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:44.052 [2024-12-09 17:25:13.103225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:44.052 [2024-12-09 17:25:13.103246] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2d0612d000 00:14:44.052 [2024-12-09 17:25:13.103535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.104537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.105542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.106545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.107548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.108559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.109566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.110568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.052 [2024-12-09 17:25:13.111574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:44.052 [2024-12-09 17:25:13.111583] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2d06122000 00:14:44.052 [2024-12-09 17:25:13.112536] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:44.052 [2024-12-09 17:25:13.120942] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:44.052 [2024-12-09 17:25:13.120963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:44.052 [2024-12-09 17:25:13.125663] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:44.052 [2024-12-09 17:25:13.125698] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:44.053 [2024-12-09 17:25:13.125773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:44.053 [2024-12-09 17:25:13.125787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:44.053 [2024-12-09 17:25:13.125792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:44.053 [2024-12-09 17:25:13.126661] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:44.053 [2024-12-09 17:25:13.126669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:44.053 [2024-12-09 17:25:13.126675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:44.053 [2024-12-09 17:25:13.127669] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:44.053 [2024-12-09 17:25:13.127677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:44.053 [2024-12-09 17:25:13.127687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:44.053 [2024-12-09 17:25:13.128675] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:44.053 [2024-12-09 17:25:13.128683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:44.053 [2024-12-09 17:25:13.129677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
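The register traffic above is the standard NVMe controller bring-up, carried over the vfio-user socket instead of a PCI BAR: read VS (offset 0x8) and CAP (offset 0x0), clear CC (offset 0x14) and wait for CSTS.RDY=0 (offset 0x1c), then set CC.EN and poll CSTS until ready. The VS value logged at offset 0x8 decodes to the same 1.3 that the identify output reports further down; per the NVMe spec, bits 31:16 hold the major version and bits 15:8 the minor:

  # Decoding the VS read traced above (offset 0x8, value 0x10300):
  vs=0x10300
  printf 'NVMe %d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff ))   # prints: NVMe 1.3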
00:14:44.053 [2024-12-09 17:25:13.129685] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:44.053 [2024-12-09 17:25:13.129690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:44.053 [2024-12-09 17:25:13.129696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:44.053 [2024-12-09 17:25:13.129803] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:44.053 [2024-12-09 17:25:13.129807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:44.053 [2024-12-09 17:25:13.129812] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:44.053 [2024-12-09 17:25:13.130689] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:44.053 [2024-12-09 17:25:13.131691] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:44.053 [2024-12-09 17:25:13.132697] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:44.053 [2024-12-09 17:25:13.133697] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.053 [2024-12-09 17:25:13.133757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:44.053 [2024-12-09 17:25:13.134706] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:44.053 [2024-12-09 17:25:13.134713] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:44.053 [2024-12-09 17:25:13.134717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134734] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:44.053 [2024-12-09 17:25:13.134740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134758] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.053 [2024-12-09 17:25:13.134763] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.053 [2024-12-09 17:25:13.134767] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.053 [2024-12-09 17:25:13.134780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:44.053 [2024-12-09 17:25:13.134824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:44.053 [2024-12-09 17:25:13.134834] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:44.053 [2024-12-09 17:25:13.134840] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:44.053 [2024-12-09 17:25:13.134844] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:44.053 [2024-12-09 17:25:13.134848] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:44.053 [2024-12-09 17:25:13.134853] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:44.053 [2024-12-09 17:25:13.134857] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:44.053 [2024-12-09 17:25:13.134861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:44.053 [2024-12-09 17:25:13.134892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:44.053 [2024-12-09 17:25:13.134902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.053 [2024-12-09 17:25:13.134910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.053 [2024-12-09 17:25:13.134917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.053 [2024-12-09 17:25:13.134924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.053 [2024-12-09 17:25:13.134928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:44.053 [2024-12-09 17:25:13.134952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:44.053 [2024-12-09 17:25:13.134957] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:44.053 
[2024-12-09 17:25:13.134961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.134980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:44.053 [2024-12-09 17:25:13.134993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:44.053 [2024-12-09 17:25:13.135044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.135051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.135058] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:44.053 [2024-12-09 17:25:13.135062] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:44.053 [2024-12-09 17:25:13.135065] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.053 [2024-12-09 17:25:13.135070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:44.053 [2024-12-09 17:25:13.135085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:44.053 [2024-12-09 17:25:13.135094] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:44.053 [2024-12-09 17:25:13.135102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.135109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.135115] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.053 [2024-12-09 17:25:13.135119] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.053 [2024-12-09 17:25:13.135122] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.053 [2024-12-09 17:25:13.135127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.053 [2024-12-09 17:25:13.135148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:44.053 [2024-12-09 17:25:13.135160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:44.053 [2024-12-09 17:25:13.135166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.054 [2024-12-09 17:25:13.135176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.054 [2024-12-09 17:25:13.135179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.054 [2024-12-09 17:25:13.135184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135245] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:44.054 [2024-12-09 17:25:13.135249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:44.054 [2024-12-09 17:25:13.135254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:44.054 [2024-12-09 17:25:13.135270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135356] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:44.054 [2024-12-09 17:25:13.135360] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:44.054 [2024-12-09 17:25:13.135363] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:44.054 [2024-12-09 17:25:13.135366] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:44.054 [2024-12-09 17:25:13.135369] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:44.054 [2024-12-09 17:25:13.135375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:44.054 [2024-12-09 17:25:13.135381] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:44.054 [2024-12-09 17:25:13.135385] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:44.054 [2024-12-09 17:25:13.135388] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.054 [2024-12-09 17:25:13.135393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135399] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:44.054 [2024-12-09 17:25:13.135403] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.054 [2024-12-09 17:25:13.135406] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.054 [2024-12-09 17:25:13.135411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:44.054 [2024-12-09 17:25:13.135423] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:44.054 [2024-12-09 17:25:13.135426] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.054 [2024-12-09 17:25:13.135431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:44.054 [2024-12-09 17:25:13.135438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:44.054 [2024-12-09 17:25:13.135463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:44.054 ===================================================== 00:14:44.054 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.054 ===================================================== 00:14:44.054 Controller Capabilities/Features 00:14:44.054 ================================ 00:14:44.054 Vendor ID: 4e58 00:14:44.054 Subsystem Vendor ID: 4e58 00:14:44.054 Serial Number: SPDK1 00:14:44.054 Model Number: SPDK bdev Controller 00:14:44.054 Firmware Version: 25.01 00:14:44.054 Recommended Arb Burst: 6 00:14:44.054 IEEE OUI Identifier: 8d 6b 50 00:14:44.054 Multi-path I/O 00:14:44.054 May have multiple subsystem ports: Yes 00:14:44.054 May have multiple controllers: Yes 00:14:44.054 Associated with SR-IOV VF: No 00:14:44.054 Max Data Transfer Size: 131072 00:14:44.054 Max Number of Namespaces: 32 00:14:44.054 Max Number of I/O Queues: 127 00:14:44.054 NVMe Specification Version (VS): 1.3 00:14:44.054 NVMe Specification Version (Identify): 1.3 00:14:44.054 Maximum Queue Entries: 256 00:14:44.054 Contiguous Queues Required: Yes 00:14:44.054 Arbitration Mechanisms Supported 00:14:44.054 Weighted Round Robin: Not Supported 00:14:44.054 Vendor Specific: Not Supported 00:14:44.054 Reset Timeout: 15000 ms 00:14:44.054 Doorbell Stride: 4 bytes 00:14:44.054 NVM Subsystem Reset: Not Supported 00:14:44.054 Command Sets Supported 00:14:44.054 NVM Command Set: Supported 00:14:44.054 Boot Partition: Not Supported 00:14:44.054 Memory Page Size Minimum: 4096 bytes 00:14:44.054 Memory Page Size Maximum: 4096 bytes 00:14:44.054 Persistent Memory Region: Not Supported 00:14:44.054 Optional Asynchronous Events Supported 00:14:44.054 Namespace Attribute Notices: Supported 00:14:44.054 Firmware Activation Notices: Not Supported 00:14:44.054 ANA Change Notices: Not Supported 00:14:44.054 PLE Aggregate Log Change Notices: Not Supported 00:14:44.054 LBA Status Info Alert Notices: Not Supported 00:14:44.054 EGE Aggregate Log Change Notices: Not Supported 00:14:44.054 Normal NVM Subsystem Shutdown event: Not Supported 00:14:44.054 Zone Descriptor Change Notices: Not Supported 00:14:44.054 Discovery Log Change Notices: Not Supported 00:14:44.054 Controller Attributes 00:14:44.054 128-bit Host Identifier: Supported 00:14:44.054 Non-Operational Permissive Mode: Not Supported 00:14:44.054 NVM Sets: Not Supported 00:14:44.054 Read Recovery Levels: Not Supported 00:14:44.054 Endurance Groups: Not Supported 00:14:44.054 Predictable Latency Mode: Not Supported 00:14:44.054 Traffic Based Keep ALive: Not Supported 00:14:44.054 Namespace Granularity: Not Supported 00:14:44.054 SQ Associations: Not Supported 00:14:44.054 UUID List: Not Supported 00:14:44.054 Multi-Domain Subsystem: Not Supported 00:14:44.054 Fixed Capacity Management: Not Supported 00:14:44.054 Variable Capacity Management: Not Supported 00:14:44.054 Delete Endurance Group: Not Supported 00:14:44.054 Delete NVM Set: Not Supported 00:14:44.054 Extended LBA Formats Supported: Not Supported 00:14:44.054 Flexible Data Placement Supported: Not Supported 00:14:44.054 00:14:44.054 Controller Memory Buffer Support 00:14:44.054 ================================ 00:14:44.054 
Supported: No 00:14:44.055 00:14:44.055 Persistent Memory Region Support 00:14:44.055 ================================ 00:14:44.055 Supported: No 00:14:44.055 00:14:44.055 Admin Command Set Attributes 00:14:44.055 ============================ 00:14:44.055 Security Send/Receive: Not Supported 00:14:44.055 Format NVM: Not Supported 00:14:44.055 Firmware Activate/Download: Not Supported 00:14:44.055 Namespace Management: Not Supported 00:14:44.055 Device Self-Test: Not Supported 00:14:44.055 Directives: Not Supported 00:14:44.055 NVMe-MI: Not Supported 00:14:44.055 Virtualization Management: Not Supported 00:14:44.055 Doorbell Buffer Config: Not Supported 00:14:44.055 Get LBA Status Capability: Not Supported 00:14:44.055 Command & Feature Lockdown Capability: Not Supported 00:14:44.055 Abort Command Limit: 4 00:14:44.055 Async Event Request Limit: 4 00:14:44.055 Number of Firmware Slots: N/A 00:14:44.055 Firmware Slot 1 Read-Only: N/A 00:14:44.055 Firmware Activation Without Reset: N/A 00:14:44.055 Multiple Update Detection Support: N/A 00:14:44.055 Firmware Update Granularity: No Information Provided 00:14:44.055 Per-Namespace SMART Log: No 00:14:44.055 Asymmetric Namespace Access Log Page: Not Supported 00:14:44.055 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:44.055 Command Effects Log Page: Supported 00:14:44.055 Get Log Page Extended Data: Supported 00:14:44.055 Telemetry Log Pages: Not Supported 00:14:44.055 Persistent Event Log Pages: Not Supported 00:14:44.055 Supported Log Pages Log Page: May Support 00:14:44.055 Commands Supported & Effects Log Page: Not Supported 00:14:44.055 Feature Identifiers & Effects Log Page:May Support 00:14:44.055 NVMe-MI Commands & Effects Log Page: May Support 00:14:44.055 Data Area 4 for Telemetry Log: Not Supported 00:14:44.055 Error Log Page Entries Supported: 128 00:14:44.055 Keep Alive: Supported 00:14:44.055 Keep Alive Granularity: 10000 ms 00:14:44.055 00:14:44.055 NVM Command Set Attributes 00:14:44.055 ========================== 00:14:44.055 Submission Queue Entry Size 00:14:44.055 Max: 64 00:14:44.055 Min: 64 00:14:44.055 Completion Queue Entry Size 00:14:44.055 Max: 16 00:14:44.055 Min: 16 00:14:44.055 Number of Namespaces: 32 00:14:44.055 Compare Command: Supported 00:14:44.055 Write Uncorrectable Command: Not Supported 00:14:44.055 Dataset Management Command: Supported 00:14:44.055 Write Zeroes Command: Supported 00:14:44.055 Set Features Save Field: Not Supported 00:14:44.055 Reservations: Not Supported 00:14:44.055 Timestamp: Not Supported 00:14:44.055 Copy: Supported 00:14:44.055 Volatile Write Cache: Present 00:14:44.055 Atomic Write Unit (Normal): 1 00:14:44.055 Atomic Write Unit (PFail): 1 00:14:44.055 Atomic Compare & Write Unit: 1 00:14:44.055 Fused Compare & Write: Supported 00:14:44.055 Scatter-Gather List 00:14:44.055 SGL Command Set: Supported (Dword aligned) 00:14:44.055 SGL Keyed: Not Supported 00:14:44.055 SGL Bit Bucket Descriptor: Not Supported 00:14:44.055 SGL Metadata Pointer: Not Supported 00:14:44.055 Oversized SGL: Not Supported 00:14:44.055 SGL Metadata Address: Not Supported 00:14:44.055 SGL Offset: Not Supported 00:14:44.055 Transport SGL Data Block: Not Supported 00:14:44.055 Replay Protected Memory Block: Not Supported 00:14:44.055 00:14:44.055 Firmware Slot Information 00:14:44.055 ========================= 00:14:44.055 Active slot: 1 00:14:44.055 Slot 1 Firmware Revision: 25.01 00:14:44.055 00:14:44.055 00:14:44.055 Commands Supported and Effects 00:14:44.055 ============================== 00:14:44.055 Admin 
Commands
00:14:44.055 --------------
00:14:44.055 Get Log Page (02h): Supported
00:14:44.055 Identify (06h): Supported
00:14:44.055 Abort (08h): Supported
00:14:44.055 Set Features (09h): Supported
00:14:44.055 Get Features (0Ah): Supported
00:14:44.055 Asynchronous Event Request (0Ch): Supported
00:14:44.055 Keep Alive (18h): Supported
00:14:44.055 I/O Commands
00:14:44.055 ------------
00:14:44.055 Flush (00h): Supported LBA-Change
00:14:44.055 Write (01h): Supported LBA-Change
00:14:44.055 Read (02h): Supported
00:14:44.055 Compare (05h): Supported
00:14:44.055 Write Zeroes (08h): Supported LBA-Change
00:14:44.055 Dataset Management (09h): Supported LBA-Change
00:14:44.055 Copy (19h): Supported LBA-Change
00:14:44.055
00:14:44.055 Error Log
00:14:44.055 =========
00:14:44.055
00:14:44.055 Arbitration
00:14:44.055 ===========
00:14:44.055 Arbitration Burst: 1
00:14:44.055
00:14:44.055 Power Management
00:14:44.055 ================
00:14:44.055 Number of Power States: 1
00:14:44.055 Current Power State: Power State #0
00:14:44.055 Power State #0:
00:14:44.055 Max Power: 0.00 W
00:14:44.055 Non-Operational State: Operational
00:14:44.055 Entry Latency: Not Reported
00:14:44.055 Exit Latency: Not Reported
00:14:44.055 Relative Read Throughput: 0
00:14:44.055 Relative Read Latency: 0
00:14:44.055 Relative Write Throughput: 0
00:14:44.055 Relative Write Latency: 0
00:14:44.055 Idle Power: Not Reported
00:14:44.055 Active Power: Not Reported
00:14:44.055 Non-Operational Permissive Mode: Not Supported
00:14:44.055
00:14:44.055 Health Information
00:14:44.055 ==================
00:14:44.055 Critical Warnings:
00:14:44.055 Available Spare Space: OK
00:14:44.055 Temperature: OK
00:14:44.055 Device Reliability: OK
00:14:44.055 Read Only: No
00:14:44.055 Volatile Memory Backup: OK
00:14:44.055 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:44.055 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:44.055 Available Spare: 0%
00:14:44.055 Available Spare Threshold: 0%
00:14:44.055 [2024-12-09 17:25:13.135545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:44.055 [2024-12-09 17:25:13.135555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:44.055 [2024-12-09 17:25:13.135582] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:14:44.055 [2024-12-09 17:25:13.135591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.055 [2024-12-09 17:25:13.135596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.055 [2024-12-09 17:25:13.135602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.055 [2024-12-09 17:25:13.135607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.055 [2024-12-09 17:25:13.139224] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:14:44.055 [2024-12-09 17:25:13.139235] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:14:44.055 [2024-12-09 17:25:13.139732] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:44.055 [2024-12-09 17:25:13.139781] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:14:44.055 [2024-12-09 17:25:13.139787] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:14:44.055 [2024-12-09 17:25:13.140752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:14:44.055 [2024-12-09 17:25:13.140762] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:14:44.055 [2024-12-09 17:25:13.140811] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:14:44.056 [2024-12-09 17:25:13.141816] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:44.056 Life Percentage Used: 0%
00:14:44.056 Data Units Read: 0
00:14:44.056 Data Units Written: 0
00:14:44.056 Host Read Commands: 0
00:14:44.056 Host Write Commands: 0
00:14:44.056 Controller Busy Time: 0 minutes
00:14:44.056 Power Cycles: 0
00:14:44.056 Power On Hours: 0 hours
00:14:44.056 Unsafe Shutdowns: 0
00:14:44.056 Unrecoverable Media Errors: 0
00:14:44.056 Lifetime Error Log Entries: 0
00:14:44.056 Warning Temperature Time: 0 minutes
00:14:44.056 Critical Temperature Time: 0 minutes
00:14:44.056
00:14:44.056 Number of Queues
00:14:44.056 ================
00:14:44.056 Number of I/O Submission Queues: 127
00:14:44.056 Number of I/O Completion Queues: 127
00:14:44.056
00:14:44.056 Active Namespaces
00:14:44.056 =================
00:14:44.056 Namespace ID:1
00:14:44.056 Error Recovery Timeout: Unlimited
00:14:44.056 Command Set Identifier: NVM (00h)
00:14:44.056 Deallocate: Supported
00:14:44.056 Deallocated/Unwritten Error: Not Supported
00:14:44.056 Deallocated Read Value: Unknown
00:14:44.056 Deallocate in Write Zeroes: Not Supported
00:14:44.056 Deallocated Guard Field: 0xFFFF
00:14:44.056 Flush: Supported
00:14:44.056 Reservation: Supported
00:14:44.056 Namespace Sharing Capabilities: Multiple Controllers
00:14:44.056 Size (in LBAs): 131072 (0GiB)
00:14:44.056 Capacity (in LBAs): 131072 (0GiB)
00:14:44.056 Utilization (in LBAs): 131072 (0GiB)
00:14:44.056 NGUID: 19882B03EB814929955B73CA6185DAEC
00:14:44.056 UUID: 19882b03-eb81-4929-955b-73ca6185daec
00:14:44.056 Thin Provisioning: Not Supported
00:14:44.056 Per-NS Atomic Units: Yes
00:14:44.056 Atomic Boundary Size (Normal): 0
00:14:44.056 Atomic Boundary Size (PFail): 0
00:14:44.056 Atomic Boundary Offset: 0
00:14:44.056 Maximum Single Source Range Length: 65535
00:14:44.056 Maximum Copy Length: 65535
00:14:44.056 Maximum Source Range Count: 1
00:14:44.056 NGUID/EUI64 Never Reused: No
00:14:44.056 Namespace Write Protected: No
00:14:44.056 Number of LBA Formats: 1
00:14:44.056 Current LBA Format: LBA Format #00
00:14:44.056 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:44.056
00:14:44.056 17:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
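With the identify pass done, the script drives the same vfio-user endpoint through the remaining workloads. Since each command line below is interleaved with controller enable/disable notices and tool output, here are the invocations condensed (paths and flags verbatim from the trace at @84-@89; the $spdk and $r shell variables are just shorthand for readability):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  r='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

  $spdk/build/bin/spdk_nvme_perf -r "$r" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2    # @84
  $spdk/build/bin/spdk_nvme_perf -r "$r" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # @85
  $spdk/build/examples/reconnect -r "$r" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE    # @86
  $spdk/build/examples/arbitration -t 3 -r "$r" -d 256 -g                                # @87
  $spdk/build/examples/hello_world -d 256 -g -r "$r"                                     # @88
  $spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$r"                    # @89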
00:14:44.315 [2024-12-09 17:25:13.366049] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:49.587 Initializing NVMe Controllers
00:14:49.587 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:49.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:14:49.587 Initialization complete. Launching workers.
00:14:49.587 ========================================================
00:14:49.587 Latency(us)
00:14:49.587 Device Information : IOPS MiB/s Average min max
00:14:49.587 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39880.00 155.78 3209.23 970.51 10603.57
00:14:49.587 ========================================================
00:14:49.587 Total : 39880.00 155.78 3209.23 970.51 10603.57
00:14:49.587
00:14:49.587 [2024-12-09 17:25:18.389979] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:49.588 17:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:14:49.588 [2024-12-09 17:25:18.625033] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:54.861 Initializing NVMe Controllers
00:14:54.862 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:54.862 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:14:54.862 Initialization complete. Launching workers.
00:14:54.862 ========================================================
00:14:54.862 Latency(us)
00:14:54.862 Device Information : IOPS MiB/s Average min max
00:14:54.862 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15922.37 62.20 8050.20 4986.21 15962.47
00:14:54.862 ========================================================
00:14:54.862 Total : 15922.37 62.20 8050.20 4986.21 15962.47
00:14:54.862
00:14:54.862 [2024-12-09 17:25:23.662587] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:54.862 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:14:54.863 [2024-12-09 17:25:23.865528] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:00.131 [2024-12-09 17:25:28.934550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:00.131 Initializing NVMe Controllers
00:15:00.131 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:00.131 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:00.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:15:00.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:15:00.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:15:00.131 Initialization complete. Launching workers.
00:15:00.131 Starting thread on core 2
00:15:00.131 Starting thread on core 3
00:15:00.132 Starting thread on core 1
00:15:00.132 17:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:15:00.132 [2024-12-09 17:25:29.229613] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:03.420 [2024-12-09 17:25:32.295063] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:03.420 Initializing NVMe Controllers
00:15:03.420 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:03.420 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:03.420 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:15:03.420 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:15:03.420 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:15:03.420 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:15:03.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:15:03.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:15:03.420 Initialization complete. Launching workers.
00:15:03.420 Starting thread on core 1 with urgent priority queue 00:15:03.420 Starting thread on core 2 with urgent priority queue 00:15:03.420 Starting thread on core 3 with urgent priority queue 00:15:03.420 Starting thread on core 0 with urgent priority queue 00:15:03.420 SPDK bdev Controller (SPDK1 ) core 0: 7810.67 IO/s 12.80 secs/100000 ios 00:15:03.420 SPDK bdev Controller (SPDK1 ) core 1: 7934.00 IO/s 12.60 secs/100000 ios 00:15:03.420 SPDK bdev Controller (SPDK1 ) core 2: 8448.67 IO/s 11.84 secs/100000 ios 00:15:03.420 SPDK bdev Controller (SPDK1 ) core 3: 10199.00 IO/s 9.80 secs/100000 ios 00:15:03.420 ======================================================== 00:15:03.420 00:15:03.420 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:03.420 [2024-12-09 17:25:32.580451] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:03.679 Initializing NVMe Controllers 00:15:03.679 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.679 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.679 Namespace ID: 1 size: 0GB 00:15:03.679 Initialization complete. 00:15:03.679 INFO: using host memory buffer for IO 00:15:03.679 Hello world! 00:15:03.679 [2024-12-09 17:25:32.613657] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:03.679 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:03.938 [2024-12-09 17:25:32.888617] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.874 Initializing NVMe Controllers 00:15:04.874 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.874 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.874 Initialization complete. Launching workers. 
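[Editor's note] In the arbitration summary above, IO/s and secs/100000 ios are reciprocal views of the same measurement, so each per-core row cross-checks with trivial arithmetic. An illustrative Python check (hypothetical helper, not an SPDK tool):

    # The arbitration example prints IO/s and secs/100000 ios per core;
    # the second column is just 100000 / (IO/s). Check the SPDK1 rows above.
    rows = {0: 7810.67, 1: 7934.00, 2: 8448.67, 3: 10199.00}  # core -> IO/s
    for core, iops in rows.items():
        print(f"core {core}: {100000 / iops:.2f} secs/100000 ios")
    # -> 12.80, 12.60, 11.84, 9.80, matching the table above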
00:15:04.874 submit (in ns) avg, min, max = 8487.7, 3147.6, 4996375.2 00:15:04.874 complete (in ns) avg, min, max = 18936.4, 1715.2, 5990222.9 00:15:04.874 00:15:04.874 Submit histogram 00:15:04.874 ================ 00:15:04.874 Range in us Cumulative Count 00:15:04.874 3.139 - 3.154: 0.0060% ( 1) 00:15:04.874 3.185 - 3.200: 0.0962% ( 15) 00:15:04.874 3.200 - 3.215: 0.4510% ( 59) 00:15:04.874 3.215 - 3.230: 2.2729% ( 303) 00:15:04.874 3.230 - 3.246: 6.4819% ( 700) 00:15:04.874 3.246 - 3.261: 11.6650% ( 862) 00:15:04.874 3.261 - 3.276: 17.1667% ( 915) 00:15:04.874 3.276 - 3.291: 24.0094% ( 1138) 00:15:04.874 3.291 - 3.307: 30.4191% ( 1066) 00:15:04.874 3.307 - 3.322: 36.3658% ( 989) 00:15:04.874 3.322 - 3.337: 42.4869% ( 1018) 00:15:04.874 3.337 - 3.352: 48.4216% ( 987) 00:15:04.874 3.352 - 3.368: 53.6107% ( 863) 00:15:04.874 3.368 - 3.383: 60.6338% ( 1168) 00:15:04.874 3.383 - 3.398: 67.3201% ( 1112) 00:15:04.874 3.398 - 3.413: 72.1123% ( 797) 00:15:04.874 3.413 - 3.429: 77.5239% ( 900) 00:15:04.874 3.429 - 3.444: 81.6668% ( 689) 00:15:04.874 3.444 - 3.459: 84.3124% ( 440) 00:15:04.874 3.459 - 3.474: 85.9960% ( 280) 00:15:04.874 3.474 - 3.490: 86.9761% ( 163) 00:15:04.874 3.490 - 3.505: 87.6075% ( 105) 00:15:04.874 3.505 - 3.520: 88.1306% ( 87) 00:15:04.874 3.520 - 3.535: 88.7499% ( 103) 00:15:04.874 3.535 - 3.550: 89.6458% ( 149) 00:15:04.874 3.550 - 3.566: 90.4275% ( 130) 00:15:04.874 3.566 - 3.581: 91.2934% ( 144) 00:15:04.874 3.581 - 3.596: 92.2855% ( 165) 00:15:04.875 3.596 - 3.611: 93.2295% ( 157) 00:15:04.875 3.611 - 3.627: 94.2457% ( 169) 00:15:04.875 3.627 - 3.642: 95.2138% ( 161) 00:15:04.875 3.642 - 3.657: 96.1217% ( 151) 00:15:04.875 3.657 - 3.672: 96.8913% ( 128) 00:15:04.875 3.672 - 3.688: 97.5468% ( 109) 00:15:04.875 3.688 - 3.703: 98.0759% ( 88) 00:15:04.875 3.703 - 3.718: 98.3825% ( 51) 00:15:04.875 3.718 - 3.733: 98.7794% ( 66) 00:15:04.875 3.733 - 3.749: 99.0139% ( 39) 00:15:04.875 3.749 - 3.764: 99.1642% ( 25) 00:15:04.875 3.764 - 3.779: 99.3025% ( 23) 00:15:04.875 3.779 - 3.794: 99.3927% ( 15) 00:15:04.875 3.794 - 3.810: 99.4769% ( 14) 00:15:04.875 3.810 - 3.825: 99.5310% ( 9) 00:15:04.875 3.825 - 3.840: 99.5490% ( 3) 00:15:04.875 3.840 - 3.855: 99.5550% ( 1) 00:15:04.875 3.855 - 3.870: 99.5611% ( 1) 00:15:04.875 3.870 - 3.886: 99.5731% ( 2) 00:15:04.875 3.886 - 3.901: 99.5791% ( 1) 00:15:04.875 3.901 - 3.931: 99.6032% ( 4) 00:15:04.875 3.931 - 3.962: 99.6092% ( 1) 00:15:04.875 3.962 - 3.992: 99.6152% ( 1) 00:15:04.875 4.053 - 4.084: 99.6212% ( 1) 00:15:04.875 4.084 - 4.114: 99.6272% ( 1) 00:15:04.875 4.145 - 4.175: 99.6332% ( 1) 00:15:04.875 4.267 - 4.297: 99.6392% ( 1) 00:15:04.875 5.150 - 5.181: 99.6452% ( 1) 00:15:04.875 5.181 - 5.211: 99.6513% ( 1) 00:15:04.875 5.242 - 5.272: 99.6573% ( 1) 00:15:04.875 5.394 - 5.425: 99.6633% ( 1) 00:15:04.875 5.425 - 5.455: 99.6693% ( 1) 00:15:04.875 5.455 - 5.486: 99.6753% ( 1) 00:15:04.875 5.516 - 5.547: 99.6813% ( 1) 00:15:04.875 5.608 - 5.638: 99.6873% ( 1) 00:15:04.875 5.669 - 5.699: 99.6933% ( 1) 00:15:04.875 5.730 - 5.760: 99.6994% ( 1) 00:15:04.875 5.973 - 6.004: 99.7054% ( 1) 00:15:04.875 6.034 - 6.065: 99.7114% ( 1) 00:15:04.875 6.065 - 6.095: 99.7174% ( 1) 00:15:04.875 6.126 - 6.156: 99.7294% ( 2) 00:15:04.875 6.309 - 6.339: 99.7354% ( 1) 00:15:04.875 6.339 - 6.370: 99.7414% ( 1) 00:15:04.875 6.370 - 6.400: 99.7475% ( 1) 00:15:04.875 6.400 - 6.430: 99.7535% ( 1) 00:15:04.875 6.491 - 6.522: 99.7595% ( 1) 00:15:04.875 6.613 - 6.644: 99.7655% ( 1) 00:15:04.875 6.766 - 6.796: 99.7715% ( 1) 00:15:04.875 6.857 - 6.888: 
99.7775% ( 1) 00:15:04.875 7.070 - 7.101: 99.7835% ( 1) 00:15:04.875 7.101 - 7.131: 99.7895% ( 1) 00:15:04.875 7.192 - 7.223: 99.7956% ( 1) 00:15:04.875 7.253 - 7.284: 99.8016% ( 1) 00:15:04.875 7.436 - 7.467: 99.8076% ( 1) 00:15:04.875 7.497 - 7.528: 99.8136% ( 1) 00:15:04.875 7.558 - 7.589: 99.8196% ( 1) 00:15:04.875 7.589 - 7.619: 99.8256% ( 1) 00:15:04.875 7.680 - 7.710: 99.8316% ( 1) 00:15:04.875 7.741 - 7.771: 99.8377% ( 1) 00:15:04.875 7.802 - 7.863: 99.8437% ( 1) 00:15:04.875 8.046 - 8.107: 99.8497% ( 1) 00:15:04.875 8.290 - 8.350: 99.8557% ( 1) 00:15:04.875 8.960 - 9.021: 99.8617% ( 1) 00:15:04.875 9.874 - 9.935: 99.8677% ( 1) 00:15:04.875 13.775 - 13.836: 99.8737% ( 1) 00:15:04.875 3994.575 - 4025.783: 99.9940% ( 20) 00:15:04.875 4993.219 - 5024.427: 100.0000% ( 1) 00:15:04.875 00:15:04.875 Complete histogram 00:15:04.875 ================== 00:15:04.875 Range in us Cumulative Count 00:15:04.875 1.714 - 1.722: 0.0601% ( 10) 00:15:04.875 1.722 - 1.730: 0.1203% ( 10) 00:15:04.875 1.730 - 1.737: 0.1804% ( 10) 00:15:04.875 1.737 - 1.745: 0.1864% ( 1) 00:15:04.875 1.745 - 1.752: 0.1924% ( 1) 00:15:04.875 1.752 - 1.760: 0.3127% ( 20) 00:15:04.875 1.760 - 1.768: 3.4754% ( 526) 00:15:04.875 1.768 - 1.775: 18.6820% ( 2529) 00:15:04.875 1.775 - 1.783: 38.3260% ( 3267) 00:15:04.875 1.783 - 1.790: 48.4938% ( 1691) 00:15:04.875 1.790 - 1.798: 52.4202% ( 653) 00:15:04.875 1.798 - 1.806: 54.6570% ( 372) 00:15:04.875 1.806 - 1.813: 55.8114% ( 192) 00:15:04.875 1.813 - 1.821: 57.3868% ( 262) 00:15:04.875 1.821 - 1.829: 64.0851% ( 1114) 00:15:04.875 1.829 - 1.836: 76.4296% ( 2053) 00:15:04.875 1.836 - 1.844: 87.0844% ( 1772) 00:15:04.875 1.844 - 1.851: 92.8808% ( 964) 00:15:04.875 1.851 - 1.859: 95.3280% ( 407) 00:15:04.875 1.859 - 1.867: 96.5847% ( 209) 00:15:04.875 1.867 - 1.874: 97.1078% ( 87) 00:15:04.875 1.874 - 1.882: 97.3844% ( 46) 00:15:04.875 1.882 - 1.890: 97.5648% ( 30) 00:15:04.875 1.890 - 1.897: 97.8354% ( 45) 00:15:04.875 1.897 - 1.905: 98.2322% ( 66) 00:15:04.875 1.905 - 1.912: 98.6170% ( 64) 00:15:04.875 1.912 - 1.920: 98.9718% ( 59) 00:15:04.875 1.920 - 1.928: 99.1582% ( 31) 00:15:04.875 1.928 - 1.935: 99.2243% ( 11) 00:15:04.875 1.935 - 1.943: 99.2724% ( 8) 00:15:04.875 1.943 - 1.950: 99.3025% ( 5) 00:15:04.875 1.950 - 1.966: 99.3326% ( 5) 00:15:04.875 1.966 - 1.981: 99.3386% ( 1) 00:15:04.875 1.981 - 1.996: 99.3506% ( 2) 00:15:04.875 2.027 - 2.042: 99.3566% ( 1) 00:15:04.875 2.042 - 2.057: 99.3626% ( 1) 00:15:04.875 2.118 - 2.133: 99.3686% ( 1) 00:15:04.875 2.133 - 2.149: 99.3747% ( 1) 00:15:04.875 2.164 - 2.179: 99.3807% ( 1) 00:15:04.875 2.179 - 2.194: 99.3927% ( 2) 00:15:04.875 2.210 - 2.225: 99.3987% ( 1) 00:15:04.875 2.225 - 2.240: 99.4047% ( 1) 00:15:04.875 2.240 - 2.255: 99.4107% ( 1) 00:15:04.875 2.255 - 2.270: 99.4168% ( 1) 00:15:04.875 2.301 - 2.316: 99.4228% ( 1) 00:15:04.875 2.408 - 2.423: 99.4288% ( 1) 00:15:04.875 2.530 - 2.545: 99.4348% ( 1) 00:15:04.875 3.794 - 3.810: 99.4468% ( 2) 00:15:04.875 3.855 - 3.870: 99.4528% ( 1) 00:15:04.875 3.886 - 3.901: 99.4588% ( 1) 00:15:04.875 3.962 - 3.992: 99.4649% ( 1) 00:15:04.875 3.992 - 4.023: 99.4709% ( 1) 00:15:04.875 4.510 - 4.541: 99.4769% ( 1) 00:15:04.875 4.602 - 4.632: 99.4829% ( 1) 00:15:04.875 4.663 - 4.693: 99.4889% ( 1) 00:15:04.875 4.693 - 4.724: 99.5009% ( 2) 00:15:04.875 4.815 - 4.846: 99.5069% ( 1) 00:15:04.875 5.181 - 5.211: 99.5130% ( 1) 00:15:04.875 5.394 - 5.425: 99.5190% ( 1) 00:15:04.875 5.425 - 5.455: 99.5250% ( 1) 00:15:04.875 5.608 - 5.638: 99.5310% ( 1) 00:15:04.875 5.730 - 5.760: 99.5370% ( 1) 
00:15:04.875 5.973 - 6.004: 99.5430% ( 1) 00:15:04.875 6.004 - 6.034: 99.5490% ( 1) 00:15:04.875 6.065 - 6.095: 99.5550% ( 1) 00:15:04.875 6.278 - 6.309: 99.5611% ( 1) 00:15:04.875 6.827 - 6.857: 99.5671% ( 1) 00:15:04.875 11.398 - 11.459: 99.5731% ( 1) 00:15:04.875 3167.573 - 3183.177: 99.5791% ( 1) 00:15:04.875 [2024-12-09 17:25:33.908494] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.875 3994.575 - 4025.783: 99.9940% ( 69) 00:15:04.875 5960.655 - 5991.863: 100.0000% ( 1) 00:15:04.875 00:15:04.875 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:05.134 [ 00:15:05.134 { 00:15:05.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.134 "subtype": "Discovery", 00:15:05.134 "listen_addresses": [], 00:15:05.134 "allow_any_host": true, 00:15:05.134 "hosts": [] 00:15:05.134 }, 00:15:05.134 { 00:15:05.134 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:05.134 "subtype": "NVMe", 00:15:05.134 "listen_addresses": [ 00:15:05.134 { 00:15:05.134 "trtype": "VFIOUSER", 00:15:05.134 "adrfam": "IPv4", 00:15:05.134 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:05.134 "trsvcid": "0" 00:15:05.134 } 00:15:05.134 ], 00:15:05.134 "allow_any_host": true, 00:15:05.134 "hosts": [], 00:15:05.134 "serial_number": "SPDK1", 00:15:05.134 "model_number": "SPDK bdev Controller", 00:15:05.134 "max_namespaces": 32, 00:15:05.134 "min_cntlid": 1, 00:15:05.134 "max_cntlid": 65519, 00:15:05.134 "namespaces": [ 00:15:05.134 { 00:15:05.134 "nsid": 1, 00:15:05.134 "bdev_name": "Malloc1", 00:15:05.134 "name": "Malloc1", 00:15:05.134 "nguid": "19882B03EB814929955B73CA6185DAEC", 00:15:05.134 "uuid": "19882b03-eb81-4929-955b-73ca6185daec" 00:15:05.134 } 00:15:05.134 ] 00:15:05.134 }, 00:15:05.134 { 00:15:05.134 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:05.134 "subtype": "NVMe", 00:15:05.134 "listen_addresses": [ 00:15:05.134 { 00:15:05.134 "trtype": "VFIOUSER", 00:15:05.134 "adrfam": "IPv4", 00:15:05.134 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:05.134 "trsvcid": "0" 00:15:05.134 } 00:15:05.134 ], 00:15:05.134 "allow_any_host": true, 00:15:05.134 "hosts": [], 00:15:05.134 "serial_number": "SPDK2", 00:15:05.134 "model_number": "SPDK bdev Controller", 00:15:05.134 "max_namespaces": 32, 00:15:05.134 "min_cntlid": 1, 00:15:05.134 "max_cntlid": 65519, 00:15:05.134 "namespaces": [ 00:15:05.135 { 00:15:05.135 "nsid": 1, 00:15:05.135 "bdev_name": "Malloc2", 00:15:05.135 "name": "Malloc2", 00:15:05.135 "nguid": "6769D2A6249C4CA7A92665C774BD2D1E", 00:15:05.135 "uuid": "6769d2a6-249c-4ca7-a926-65c774bd2d1e" 00:15:05.135 } 00:15:05.135 ] 00:15:05.135 } 00:15:05.135 ] 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 17:25:34
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2545153 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:05.135 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:05.135 [2024-12-09 17:25:34.300640] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.393 Malloc3 00:15:05.394 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:05.394 [2024-12-09 17:25:34.552531] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.652 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:05.652 Asynchronous Event Request test 00:15:05.652 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.652 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.652 Registering asynchronous event callbacks... 00:15:05.652 Starting namespace attribute notice tests for all controllers... 00:15:05.652 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:05.652 aer_cb - Changed Namespace 00:15:05.652 Cleaning up... 
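[Editor's note] The @30-@41 sequence above is the asynchronous-event (AER) handshake: the aer example starts in the background and touches /tmp/aer_touch_file once its event listener is armed, the script blocks in waitforfile, and only then hot-adds Malloc3 to cnode1 as namespace 2, which fires the Namespace Attribute Changed notice seen in the aer_cb line. The nvmf_get_subsystems dump that follows confirms the new namespace. A rough Python stand-in for the waitforfile() helper from common/autotest_common.sh (illustrative only; the real helper is the shell loop traced above):

    import os
    import time

    def waitforfile(path: str, timeout_s: float = 60.0, poll_s: float = 0.1) -> None:
        # Block until `path` exists -- the synchronization point that
        # guarantees the aer tool is already listening before the
        # namespace is added.
        deadline = time.monotonic() + timeout_s
        while not os.path.exists(path):
            if time.monotonic() > deadline:
                raise TimeoutError(f"{path} was never created")
            time.sleep(poll_s)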
00:15:05.652 [ 00:15:05.652 { 00:15:05.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.652 "subtype": "Discovery", 00:15:05.652 "listen_addresses": [], 00:15:05.652 "allow_any_host": true, 00:15:05.652 "hosts": [] 00:15:05.652 }, 00:15:05.652 { 00:15:05.652 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:05.652 "subtype": "NVMe", 00:15:05.652 "listen_addresses": [ 00:15:05.652 { 00:15:05.652 "trtype": "VFIOUSER", 00:15:05.652 "adrfam": "IPv4", 00:15:05.652 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:05.652 "trsvcid": "0" 00:15:05.652 } 00:15:05.652 ], 00:15:05.652 "allow_any_host": true, 00:15:05.652 "hosts": [], 00:15:05.652 "serial_number": "SPDK1", 00:15:05.652 "model_number": "SPDK bdev Controller", 00:15:05.652 "max_namespaces": 32, 00:15:05.652 "min_cntlid": 1, 00:15:05.652 "max_cntlid": 65519, 00:15:05.652 "namespaces": [ 00:15:05.652 { 00:15:05.652 "nsid": 1, 00:15:05.652 "bdev_name": "Malloc1", 00:15:05.652 "name": "Malloc1", 00:15:05.652 "nguid": "19882B03EB814929955B73CA6185DAEC", 00:15:05.652 "uuid": "19882b03-eb81-4929-955b-73ca6185daec" 00:15:05.652 }, 00:15:05.652 { 00:15:05.652 "nsid": 2, 00:15:05.652 "bdev_name": "Malloc3", 00:15:05.652 "name": "Malloc3", 00:15:05.653 "nguid": "F61D6D6A04D742D5BF5EA38669F691CB", 00:15:05.653 "uuid": "f61d6d6a-04d7-42d5-bf5e-a38669f691cb" 00:15:05.653 } 00:15:05.653 ] 00:15:05.653 }, 00:15:05.653 { 00:15:05.653 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:05.653 "subtype": "NVMe", 00:15:05.653 "listen_addresses": [ 00:15:05.653 { 00:15:05.653 "trtype": "VFIOUSER", 00:15:05.653 "adrfam": "IPv4", 00:15:05.653 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:05.653 "trsvcid": "0" 00:15:05.653 } 00:15:05.653 ], 00:15:05.653 "allow_any_host": true, 00:15:05.653 "hosts": [], 00:15:05.653 "serial_number": "SPDK2", 00:15:05.653 "model_number": "SPDK bdev Controller", 00:15:05.653 "max_namespaces": 32, 00:15:05.653 "min_cntlid": 1, 00:15:05.653 "max_cntlid": 65519, 00:15:05.653 "namespaces": [ 00:15:05.653 { 00:15:05.653 "nsid": 1, 00:15:05.653 "bdev_name": "Malloc2", 00:15:05.653 "name": "Malloc2", 00:15:05.653 "nguid": "6769D2A6249C4CA7A92665C774BD2D1E", 00:15:05.653 "uuid": "6769d2a6-249c-4ca7-a926-65c774bd2d1e" 00:15:05.653 } 00:15:05.653 ] 00:15:05.653 } 00:15:05.653 ] 00:15:05.653 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2545153 00:15:05.653 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.653 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:05.653 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:05.653 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:05.653 [2024-12-09 17:25:34.822488] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
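[Editor's note] The JSON above is where the hot-add is actually visible: cnode1 now carries Malloc1 as nsid 1 and Malloc3 as nsid 2, while cnode2 is unchanged. A hypothetical mechanical check against the rpc.py output (the test script itself just dumps the JSON; the relative path is an assumption for a local SPDK checkout):

    import json
    import subprocess

    out = subprocess.check_output(["scripts/rpc.py", "nvmf_get_subsystems"])
    subsystems = {s["nqn"]: s for s in json.loads(out)}
    nss = {ns["nsid"]: ns["name"]
           for ns in subsystems["nqn.2019-07.io.spdk:cnode1"]["namespaces"]}
    assert nss == {1: "Malloc1", 2: "Malloc3"}

(The spdk_nvme_identify run against cnode2 continues below.)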
00:15:05.653 [2024-12-09 17:25:34.822536] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545374 ] 00:15:05.913 [2024-12-09 17:25:34.860105] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:05.913 [2024-12-09 17:25:34.868375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:05.914 [2024-12-09 17:25:34.868400] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff15326d000 00:15:05.914 [2024-12-09 17:25:34.869374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.870377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.871381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.872395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.873402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.874414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.875421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.876430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.914 [2024-12-09 17:25:34.877443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:05.914 [2024-12-09 17:25:34.877453] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff153262000 00:15:05.914 [2024-12-09 17:25:34.878370] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:05.914 [2024-12-09 17:25:34.887721] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:05.914 [2024-12-09 17:25:34.887744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:05.914 [2024-12-09 17:25:34.892832] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:05.914 [2024-12-09 17:25:34.892871] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:05.914 [2024-12-09 17:25:34.892943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:05.914 
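[Editor's note] The nvme_ctrlr.c and nvme_vfio_user.c DEBUG lines that follow trace the standard NVMe controller-enable handshake, carried here over vfio-user register accesses instead of PCIe BARs: read VS (offset 0x8) and CAP (offset 0x0), check CC (offset 0x14), make sure CSTS.RDY (offset 0x1c) is 0 while disabled, write CC.EN = 1, then poll CSTS until RDY = 1. A simplified Python rendering of that state machine (read32/write32 are placeholder register accessors; the real logic lives in nvme_ctrlr.c):

    import time

    # NVMe register offsets, matching the DEBUG lines: CC at 0x14
    # (value 0x460001 once EN is set), CSTS at 0x1c (value 0x1 when ready).
    CC, CSTS = 0x14, 0x1C
    CC_EN, CSTS_RDY = 1 << 0, 1 << 0

    def enable_controller(read32, write32, poll_s=0.001):
        if read32(CC) & CC_EN:                # "check en"
            write32(CC, read32(CC) & ~CC_EN)
        while read32(CSTS) & CSTS_RDY:        # "disable and wait for CSTS.RDY = 0"
            time.sleep(poll_s)
        write32(CC, read32(CC) | CC_EN)       # "Setting CC.EN = 1"
        while not (read32(CSTS) & CSTS_RDY):  # "wait for CSTS.RDY = 1"
            time.sleep(poll_s)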
[2024-12-09 17:25:34.892956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:05.914 [2024-12-09 17:25:34.892960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:05.914 [2024-12-09 17:25:34.893837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:05.914 [2024-12-09 17:25:34.893847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:05.914 [2024-12-09 17:25:34.893853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:05.914 [2024-12-09 17:25:34.894847] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:05.914 [2024-12-09 17:25:34.894856] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:05.914 [2024-12-09 17:25:34.894866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:05.914 [2024-12-09 17:25:34.895854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:05.914 [2024-12-09 17:25:34.895863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:05.914 [2024-12-09 17:25:34.896863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:05.914 [2024-12-09 17:25:34.896872] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:05.914 [2024-12-09 17:25:34.896877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:05.914 [2024-12-09 17:25:34.896882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:05.914 [2024-12-09 17:25:34.896990] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:05.914 [2024-12-09 17:25:34.896995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:05.914 [2024-12-09 17:25:34.896999] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:05.914 [2024-12-09 17:25:34.897878] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:05.914 [2024-12-09 17:25:34.898887] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:05.914 [2024-12-09 17:25:34.899892] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:05.914 [2024-12-09 17:25:34.900894] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.914 [2024-12-09 17:25:34.900931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:05.914 [2024-12-09 17:25:34.901910] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:05.914 [2024-12-09 17:25:34.901918] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:05.914 [2024-12-09 17:25:34.901923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.901940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:05.914 [2024-12-09 17:25:34.901947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.901961] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:05.914 [2024-12-09 17:25:34.901966] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.914 [2024-12-09 17:25:34.901969] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.914 [2024-12-09 17:25:34.901981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.914 [2024-12-09 17:25:34.909223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:05.914 [2024-12-09 17:25:34.909234] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:05.914 [2024-12-09 17:25:34.909241] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:05.914 [2024-12-09 17:25:34.909245] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:05.914 [2024-12-09 17:25:34.909250] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:05.914 [2024-12-09 17:25:34.909254] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:05.914 [2024-12-09 17:25:34.909258] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:05.914 [2024-12-09 17:25:34.909262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.909269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:05.914 [2024-12-09 
17:25:34.909279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:05.914 [2024-12-09 17:25:34.917222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:05.914 [2024-12-09 17:25:34.917234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.914 [2024-12-09 17:25:34.917242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.914 [2024-12-09 17:25:34.917249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.914 [2024-12-09 17:25:34.917256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.914 [2024-12-09 17:25:34.917260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.917268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.917277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:05.914 [2024-12-09 17:25:34.925222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:05.914 [2024-12-09 17:25:34.925229] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:05.914 [2024-12-09 17:25:34.925234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.925240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.925245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.925253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:05.914 [2024-12-09 17:25:34.933222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:05.914 [2024-12-09 17:25:34.933280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.933288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:05.914 [2024-12-09 17:25:34.933296] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:05.914 [2024-12-09 17:25:34.933300] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:05.914 [2024-12-09 17:25:34.933303] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.914 [2024-12-09 17:25:34.933309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:05.914 [2024-12-09 17:25:34.941224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.941235] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:05.915 [2024-12-09 17:25:34.941246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.941253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.941259] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:05.915 [2024-12-09 17:25:34.941263] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.915 [2024-12-09 17:25:34.941267] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.915 [2024-12-09 17:25:34.941272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.949224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.949238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.949245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.949252] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:05.915 [2024-12-09 17:25:34.949256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.915 [2024-12-09 17:25:34.949259] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.915 [2024-12-09 17:25:34.949265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.957223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.957232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957269] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:05.915 [2024-12-09 17:25:34.957274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:05.915 [2024-12-09 17:25:34.957278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:05.915 [2024-12-09 17:25:34.957294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.965223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.965236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.973224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.973235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.981223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.981236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.989224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.989239] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:05.915 [2024-12-09 17:25:34.989243] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:05.915 [2024-12-09 17:25:34.989246] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:05.915 [2024-12-09 17:25:34.989250] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:05.915 [2024-12-09 17:25:34.989253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:05.915 [2024-12-09 17:25:34.989258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:05.915 [2024-12-09 17:25:34.989265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:05.915 
[2024-12-09 17:25:34.989269] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:05.915 [2024-12-09 17:25:34.989272] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.915 [2024-12-09 17:25:34.989277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.989283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:05.915 [2024-12-09 17:25:34.989287] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.915 [2024-12-09 17:25:34.989290] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.915 [2024-12-09 17:25:34.989296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.989305] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:05.915 [2024-12-09 17:25:34.989309] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:05.915 [2024-12-09 17:25:34.989312] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.915 [2024-12-09 17:25:34.989317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:05.915 [2024-12-09 17:25:34.996861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.996876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.996886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:05.915 [2024-12-09 17:25:34.996893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:05.915 ===================================================== 00:15:05.915 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.915 ===================================================== 00:15:05.915 Controller Capabilities/Features 00:15:05.915 ================================ 00:15:05.915 Vendor ID: 4e58 00:15:05.915 Subsystem Vendor ID: 4e58 00:15:05.915 Serial Number: SPDK2 00:15:05.915 Model Number: SPDK bdev Controller 00:15:05.915 Firmware Version: 25.01 00:15:05.915 Recommended Arb Burst: 6 00:15:05.915 IEEE OUI Identifier: 8d 6b 50 00:15:05.915 Multi-path I/O 00:15:05.915 May have multiple subsystem ports: Yes 00:15:05.915 May have multiple controllers: Yes 00:15:05.915 Associated with SR-IOV VF: No 00:15:05.915 Max Data Transfer Size: 131072 00:15:05.915 Max Number of Namespaces: 32 00:15:05.915 Max Number of I/O Queues: 127 00:15:05.915 NVMe Specification Version (VS): 1.3 00:15:05.915 NVMe Specification Version (Identify): 1.3 00:15:05.915 Maximum Queue Entries: 256 00:15:05.915 Contiguous Queues Required: Yes 00:15:05.915 Arbitration Mechanisms Supported 00:15:05.915 Weighted Round Robin: Not Supported 00:15:05.915 Vendor Specific: Not 
Supported 00:15:05.915 Reset Timeout: 15000 ms 00:15:05.915 Doorbell Stride: 4 bytes 00:15:05.915 NVM Subsystem Reset: Not Supported 00:15:05.915 Command Sets Supported 00:15:05.915 NVM Command Set: Supported 00:15:05.915 Boot Partition: Not Supported 00:15:05.915 Memory Page Size Minimum: 4096 bytes 00:15:05.915 Memory Page Size Maximum: 4096 bytes 00:15:05.915 Persistent Memory Region: Not Supported 00:15:05.915 Optional Asynchronous Events Supported 00:15:05.915 Namespace Attribute Notices: Supported 00:15:05.915 Firmware Activation Notices: Not Supported 00:15:05.915 ANA Change Notices: Not Supported 00:15:05.915 PLE Aggregate Log Change Notices: Not Supported 00:15:05.915 LBA Status Info Alert Notices: Not Supported 00:15:05.915 EGE Aggregate Log Change Notices: Not Supported 00:15:05.915 Normal NVM Subsystem Shutdown event: Not Supported 00:15:05.915 Zone Descriptor Change Notices: Not Supported 00:15:05.915 Discovery Log Change Notices: Not Supported 00:15:05.915 Controller Attributes 00:15:05.915 128-bit Host Identifier: Supported 00:15:05.915 Non-Operational Permissive Mode: Not Supported 00:15:05.915 NVM Sets: Not Supported 00:15:05.915 Read Recovery Levels: Not Supported 00:15:05.915 Endurance Groups: Not Supported 00:15:05.915 Predictable Latency Mode: Not Supported 00:15:05.915 Traffic Based Keep ALive: Not Supported 00:15:05.915 Namespace Granularity: Not Supported 00:15:05.915 SQ Associations: Not Supported 00:15:05.915 UUID List: Not Supported 00:15:05.915 Multi-Domain Subsystem: Not Supported 00:15:05.915 Fixed Capacity Management: Not Supported 00:15:05.915 Variable Capacity Management: Not Supported 00:15:05.915 Delete Endurance Group: Not Supported 00:15:05.915 Delete NVM Set: Not Supported 00:15:05.915 Extended LBA Formats Supported: Not Supported 00:15:05.915 Flexible Data Placement Supported: Not Supported 00:15:05.915 00:15:05.915 Controller Memory Buffer Support 00:15:05.915 ================================ 00:15:05.915 Supported: No 00:15:05.916 00:15:05.916 Persistent Memory Region Support 00:15:05.916 ================================ 00:15:05.916 Supported: No 00:15:05.916 00:15:05.916 Admin Command Set Attributes 00:15:05.916 ============================ 00:15:05.916 Security Send/Receive: Not Supported 00:15:05.916 Format NVM: Not Supported 00:15:05.916 Firmware Activate/Download: Not Supported 00:15:05.916 Namespace Management: Not Supported 00:15:05.916 Device Self-Test: Not Supported 00:15:05.916 Directives: Not Supported 00:15:05.916 NVMe-MI: Not Supported 00:15:05.916 Virtualization Management: Not Supported 00:15:05.916 Doorbell Buffer Config: Not Supported 00:15:05.916 Get LBA Status Capability: Not Supported 00:15:05.916 Command & Feature Lockdown Capability: Not Supported 00:15:05.916 Abort Command Limit: 4 00:15:05.916 Async Event Request Limit: 4 00:15:05.916 Number of Firmware Slots: N/A 00:15:05.916 Firmware Slot 1 Read-Only: N/A 00:15:05.916 Firmware Activation Without Reset: N/A 00:15:05.916 Multiple Update Detection Support: N/A 00:15:05.916 Firmware Update Granularity: No Information Provided 00:15:05.916 Per-Namespace SMART Log: No 00:15:05.916 Asymmetric Namespace Access Log Page: Not Supported 00:15:05.916 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:05.916 Command Effects Log Page: Supported 00:15:05.916 Get Log Page Extended Data: Supported 00:15:05.916 Telemetry Log Pages: Not Supported 00:15:05.916 Persistent Event Log Pages: Not Supported 00:15:05.916 Supported Log Pages Log Page: May Support 00:15:05.916 Commands Supported & 
Effects Log Page: Not Supported 00:15:05.916 Feature Identifiers & Effects Log Page:May Support 00:15:05.916 NVMe-MI Commands & Effects Log Page: May Support 00:15:05.916 Data Area 4 for Telemetry Log: Not Supported 00:15:05.916 Error Log Page Entries Supported: 128 00:15:05.916 Keep Alive: Supported 00:15:05.916 Keep Alive Granularity: 10000 ms 00:15:05.916 00:15:05.916 NVM Command Set Attributes 00:15:05.916 ========================== 00:15:05.916 Submission Queue Entry Size 00:15:05.916 Max: 64 00:15:05.916 Min: 64 00:15:05.916 Completion Queue Entry Size 00:15:05.916 Max: 16 00:15:05.916 Min: 16 00:15:05.916 Number of Namespaces: 32 00:15:05.916 Compare Command: Supported 00:15:05.916 Write Uncorrectable Command: Not Supported 00:15:05.916 Dataset Management Command: Supported 00:15:05.916 Write Zeroes Command: Supported 00:15:05.916 Set Features Save Field: Not Supported 00:15:05.916 Reservations: Not Supported 00:15:05.916 Timestamp: Not Supported 00:15:05.916 Copy: Supported 00:15:05.916 Volatile Write Cache: Present 00:15:05.916 Atomic Write Unit (Normal): 1 00:15:05.916 Atomic Write Unit (PFail): 1 00:15:05.916 Atomic Compare & Write Unit: 1 00:15:05.916 Fused Compare & Write: Supported 00:15:05.916 Scatter-Gather List 00:15:05.916 SGL Command Set: Supported (Dword aligned) 00:15:05.916 SGL Keyed: Not Supported 00:15:05.916 SGL Bit Bucket Descriptor: Not Supported 00:15:05.916 SGL Metadata Pointer: Not Supported 00:15:05.916 Oversized SGL: Not Supported 00:15:05.916 SGL Metadata Address: Not Supported 00:15:05.916 SGL Offset: Not Supported 00:15:05.916 Transport SGL Data Block: Not Supported 00:15:05.916 Replay Protected Memory Block: Not Supported 00:15:05.916 00:15:05.916 Firmware Slot Information 00:15:05.916 ========================= 00:15:05.916 Active slot: 1 00:15:05.916 Slot 1 Firmware Revision: 25.01 00:15:05.916 00:15:05.916 00:15:05.916 Commands Supported and Effects 00:15:05.916 ============================== 00:15:05.916 Admin Commands 00:15:05.916 -------------- 00:15:05.916 Get Log Page (02h): Supported 00:15:05.916 Identify (06h): Supported 00:15:05.916 Abort (08h): Supported 00:15:05.916 Set Features (09h): Supported 00:15:05.916 Get Features (0Ah): Supported 00:15:05.916 Asynchronous Event Request (0Ch): Supported 00:15:05.916 Keep Alive (18h): Supported 00:15:05.916 I/O Commands 00:15:05.916 ------------ 00:15:05.916 Flush (00h): Supported LBA-Change 00:15:05.916 Write (01h): Supported LBA-Change 00:15:05.916 Read (02h): Supported 00:15:05.916 Compare (05h): Supported 00:15:05.916 Write Zeroes (08h): Supported LBA-Change 00:15:05.916 Dataset Management (09h): Supported LBA-Change 00:15:05.916 Copy (19h): Supported LBA-Change 00:15:05.916 00:15:05.916 Error Log 00:15:05.916 ========= 00:15:05.916 00:15:05.916 Arbitration 00:15:05.916 =========== 00:15:05.916 Arbitration Burst: 1 00:15:05.916 00:15:05.916 Power Management 00:15:05.916 ================ 00:15:05.916 Number of Power States: 1 00:15:05.916 Current Power State: Power State #0 00:15:05.916 Power State #0: 00:15:05.916 Max Power: 0.00 W 00:15:05.916 Non-Operational State: Operational 00:15:05.916 Entry Latency: Not Reported 00:15:05.916 Exit Latency: Not Reported 00:15:05.916 Relative Read Throughput: 0 00:15:05.916 Relative Read Latency: 0 00:15:05.916 Relative Write Throughput: 0 00:15:05.916 Relative Write Latency: 0 00:15:05.916 Idle Power: Not Reported 00:15:05.916 Active Power: Not Reported 00:15:05.916 Non-Operational Permissive Mode: Not Supported 00:15:05.916 00:15:05.916 Health Information 
00:15:05.916 ================== 00:15:05.916 Critical Warnings: 00:15:05.916 Available Spare Space: OK 00:15:05.916 Temperature: OK 00:15:05.916 Device Reliability: OK 00:15:05.916 Read Only: No 00:15:05.916 Volatile Memory Backup: OK 00:15:05.916 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:05.916 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:05.916 Available Spare: 0% 00:15:05.916 Available Spare Threshold: 0% 00:15:05.916 [2024-12-09 17:25:34.996982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:05.916 [2024-12-09 17:25:35.004222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:05.916 [2024-12-09 17:25:35.004254] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:05.916 [2024-12-09 17:25:35.004263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.916 [2024-12-09 17:25:35.004269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.916 [2024-12-09 17:25:35.004274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.916 [2024-12-09 17:25:35.004280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.916 [2024-12-09 17:25:35.004331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:05.916 [2024-12-09 17:25:35.004341] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:05.916 [2024-12-09 17:25:35.005333] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.916 [2024-12-09 17:25:35.005377] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:05.916 [2024-12-09 17:25:35.005383] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:05.916 [2024-12-09 17:25:35.006342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:05.916 [2024-12-09 17:25:35.006353] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:05.916 [2024-12-09 17:25:35.006400] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:05.916 [2024-12-09 17:25:35.007366] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:05.916 Life Percentage Used: 0% 00:15:05.916 Data Units Read: 0 00:15:05.916 Data Units Written: 0 00:15:05.916 Host Read Commands: 0 00:15:05.916 Host Write Commands: 0 00:15:05.916 Controller Busy Time: 0 minutes 00:15:05.916 Power Cycles: 0 00:15:05.916 Power On Hours: 0 hours 00:15:05.916 Unsafe Shutdowns: 0 00:15:05.916 Unrecoverable Media Errors: 0 00:15:05.916 Lifetime Error Log Entries: 0 00:15:05.916 Warning Temperature
Time: 0 minutes 00:15:05.916 Critical Temperature Time: 0 minutes 00:15:05.916 00:15:05.916 Number of Queues 00:15:05.916 ================ 00:15:05.916 Number of I/O Submission Queues: 127 00:15:05.916 Number of I/O Completion Queues: 127 00:15:05.916 00:15:05.916 Active Namespaces 00:15:05.916 ================= 00:15:05.916 Namespace ID:1 00:15:05.916 Error Recovery Timeout: Unlimited 00:15:05.916 Command Set Identifier: NVM (00h) 00:15:05.916 Deallocate: Supported 00:15:05.916 Deallocated/Unwritten Error: Not Supported 00:15:05.916 Deallocated Read Value: Unknown 00:15:05.916 Deallocate in Write Zeroes: Not Supported 00:15:05.916 Deallocated Guard Field: 0xFFFF 00:15:05.916 Flush: Supported 00:15:05.916 Reservation: Supported 00:15:05.916 Namespace Sharing Capabilities: Multiple Controllers 00:15:05.916 Size (in LBAs): 131072 (0GiB) 00:15:05.916 Capacity (in LBAs): 131072 (0GiB) 00:15:05.916 Utilization (in LBAs): 131072 (0GiB) 00:15:05.916 NGUID: 6769D2A6249C4CA7A92665C774BD2D1E 00:15:05.916 UUID: 6769d2a6-249c-4ca7-a926-65c774bd2d1e 00:15:05.916 Thin Provisioning: Not Supported 00:15:05.916 Per-NS Atomic Units: Yes 00:15:05.917 Atomic Boundary Size (Normal): 0 00:15:05.917 Atomic Boundary Size (PFail): 0 00:15:05.917 Atomic Boundary Offset: 0 00:15:05.917 Maximum Single Source Range Length: 65535 00:15:05.917 Maximum Copy Length: 65535 00:15:05.917 Maximum Source Range Count: 1 00:15:05.917 NGUID/EUI64 Never Reused: No 00:15:05.917 Namespace Write Protected: No 00:15:05.917 Number of LBA Formats: 1 00:15:05.917 Current LBA Format: LBA Format #00 00:15:05.917 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:05.917 00:15:05.917 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:06.174 [2024-12-09 17:25:35.238450] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.441 Initializing NVMe Controllers 00:15:11.441 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:11.441 Initialization complete. Launching workers. 
00:15:11.441 ======================================================== 00:15:11.441 Latency(us) 00:15:11.441 Device Information : IOPS MiB/s Average min max 00:15:11.441 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39945.11 156.04 3204.23 957.96 9357.64 00:15:11.441 ======================================================== 00:15:11.441 Total : 39945.11 156.04 3204.23 957.96 9357.64 00:15:11.441 00:15:11.441 [2024-12-09 17:25:40.342475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.441 17:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:11.441 [2024-12-09 17:25:40.580165] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.710 Initializing NVMe Controllers 00:15:16.710 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:16.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:16.710 Initialization complete. Launching workers. 00:15:16.710 ======================================================== 00:15:16.710 Latency(us) 00:15:16.711 Device Information : IOPS MiB/s Average min max 00:15:16.711 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39895.35 155.84 3208.23 969.54 10375.34 00:15:16.711 ======================================================== 00:15:16.711 Total : 39895.35 155.84 3208.23 969.54 10375.34 00:15:16.711 00:15:16.711 [2024-12-09 17:25:45.599310] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.711 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:16.711 [2024-12-09 17:25:45.809539] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.980 [2024-12-09 17:25:50.944314] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.980 Initializing NVMe Controllers 00:15:21.980 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.980 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:21.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:21.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:21.980 Initialization complete. Launching workers. 
00:15:21.980 Starting thread on core 2 00:15:21.980 Starting thread on core 3 00:15:21.980 Starting thread on core 1 00:15:21.980 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:22.239 [2024-12-09 17:25:51.240696] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.528 [2024-12-09 17:25:54.301196] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.528 Initializing NVMe Controllers 00:15:25.528 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.528 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.528 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:25.528 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:25.528 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:25.528 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:25.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:25.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:25.528 Initialization complete. Launching workers. 00:15:25.528 Starting thread on core 1 with urgent priority queue 00:15:25.528 Starting thread on core 2 with urgent priority queue 00:15:25.528 Starting thread on core 3 with urgent priority queue 00:15:25.528 Starting thread on core 0 with urgent priority queue 00:15:25.528 SPDK bdev Controller (SPDK2 ) core 0: 8756.67 IO/s 11.42 secs/100000 ios 00:15:25.528 SPDK bdev Controller (SPDK2 ) core 1: 9193.33 IO/s 10.88 secs/100000 ios 00:15:25.528 SPDK bdev Controller (SPDK2 ) core 2: 7779.67 IO/s 12.85 secs/100000 ios 00:15:25.528 SPDK bdev Controller (SPDK2 ) core 3: 10264.67 IO/s 9.74 secs/100000 ios 00:15:25.528 ======================================================== 00:15:25.528 00:15:25.528 17:25:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:25.528 [2024-12-09 17:25:54.589641] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.528 Initializing NVMe Controllers 00:15:25.528 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.528 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.528 Namespace ID: 1 size: 0GB 00:15:25.528 Initialization complete. 00:15:25.528 INFO: using host memory buffer for IO 00:15:25.528 Hello world! 
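The same controller is then exercised by three of the bundled example binaries: the reconnect stress (32-deep 4 KiB randrw at a 50/50 mix on cores 1-3 via -c 0xE), a 3-second arbitration run, and hello_world's single write/read round trip. A condensed sketch with the exact flags used here, where the EXAMPLES/TRID names are illustrative:

EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
"$EXAMPLES/reconnect"   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
"$EXAMPLES/arbitration" -t 3 -r "$TRID" -d 256 -g
"$EXAMPLES/hello_world" -d 256 -g -r "$TRID"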
00:15:25.528 [2024-12-09 17:25:54.602725] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.528 17:25:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:25.787 [2024-12-09 17:25:54.875085] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.162 Initializing NVMe Controllers 00:15:27.162 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.162 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.162 Initialization complete. Launching workers. 00:15:27.162 submit (in ns) avg, min, max = 5320.3, 3142.9, 4000940.0 00:15:27.162 complete (in ns) avg, min, max = 23543.4, 1717.1, 4994384.8 00:15:27.162 00:15:27.162 Submit histogram 00:15:27.162 ================ 00:15:27.162 Range in us Cumulative Count 00:15:27.162 3.139 - 3.154: 0.0121% ( 2) 00:15:27.162 3.170 - 3.185: 0.0424% ( 5) 00:15:27.162 3.185 - 3.200: 0.0666% ( 4) 00:15:27.162 3.200 - 3.215: 0.3813% ( 52) 00:15:27.162 3.215 - 3.230: 2.3183% ( 320) 00:15:27.162 3.230 - 3.246: 6.7793% ( 737) 00:15:27.162 3.246 - 3.261: 12.2511% ( 904) 00:15:27.162 3.261 - 3.276: 18.0800% ( 963) 00:15:27.162 3.276 - 3.291: 25.2103% ( 1178) 00:15:27.162 3.291 - 3.307: 31.1906% ( 988) 00:15:27.162 3.307 - 3.322: 36.8924% ( 942) 00:15:27.162 3.322 - 3.337: 42.7819% ( 973) 00:15:27.162 3.337 - 3.352: 48.0479% ( 870) 00:15:27.162 3.352 - 3.368: 53.5500% ( 909) 00:15:27.162 3.368 - 3.383: 60.6622% ( 1175) 00:15:27.162 3.383 - 3.398: 68.0225% ( 1216) 00:15:27.162 3.398 - 3.413: 73.4580% ( 898) 00:15:27.162 3.413 - 3.429: 78.5122% ( 835) 00:15:27.162 3.429 - 3.444: 82.0168% ( 579) 00:15:27.162 3.444 - 3.459: 84.5893% ( 425) 00:15:27.162 3.459 - 3.474: 86.4052% ( 300) 00:15:27.162 3.474 - 3.490: 87.1921% ( 130) 00:15:27.162 3.490 - 3.505: 87.6944% ( 83) 00:15:27.162 3.505 - 3.520: 88.1303% ( 72) 00:15:27.162 3.520 - 3.535: 88.7355% ( 100) 00:15:27.162 3.535 - 3.550: 89.5164% ( 129) 00:15:27.162 3.550 - 3.566: 90.4969% ( 162) 00:15:27.163 3.566 - 3.581: 91.3928% ( 148) 00:15:27.163 3.581 - 3.596: 92.3370% ( 156) 00:15:27.163 3.596 - 3.611: 93.3236% ( 163) 00:15:27.163 3.611 - 3.627: 94.1832% ( 142) 00:15:27.163 3.627 - 3.642: 95.2061% ( 169) 00:15:27.163 3.642 - 3.657: 96.1261% ( 152) 00:15:27.163 3.657 - 3.672: 96.9251% ( 132) 00:15:27.163 3.672 - 3.688: 97.4941% ( 94) 00:15:27.163 3.688 - 3.703: 98.0691% ( 95) 00:15:27.163 3.703 - 3.718: 98.4868% ( 69) 00:15:27.163 3.718 - 3.733: 98.7955% ( 51) 00:15:27.163 3.733 - 3.749: 99.0860% ( 48) 00:15:27.163 3.749 - 3.764: 99.2676% ( 30) 00:15:27.163 3.764 - 3.779: 99.4371% ( 28) 00:15:27.163 3.779 - 3.794: 99.5460% ( 18) 00:15:27.163 3.794 - 3.810: 99.5763% ( 5) 00:15:27.163 3.810 - 3.825: 99.6066% ( 5) 00:15:27.163 3.825 - 3.840: 99.6247% ( 3) 00:15:27.163 3.840 - 3.855: 99.6368% ( 2) 00:15:27.163 3.855 - 3.870: 99.6429% ( 1) 00:15:27.163 3.901 - 3.931: 99.6489% ( 1) 00:15:27.163 4.053 - 4.084: 99.6550% ( 1) 00:15:27.163 4.084 - 4.114: 99.6610% ( 1) 00:15:27.163 4.114 - 4.145: 99.6671% ( 1) 00:15:27.163 4.968 - 4.998: 99.6731% ( 1) 00:15:27.163 5.333 - 5.364: 99.6792% ( 1) 00:15:27.163 5.486 - 5.516: 99.6852% ( 1) 00:15:27.163 5.608 - 5.638: 99.6974% ( 2) 00:15:27.163 5.638 - 5.669: 99.7034% ( 1) 00:15:27.163 5.669 - 5.699: 99.7095% ( 1) 00:15:27.163 
5.699 - 5.730: 99.7155% ( 1) 00:15:27.163 5.730 - 5.760: 99.7216% ( 1) 00:15:27.163 5.760 - 5.790: 99.7337% ( 2) 00:15:27.163 5.790 - 5.821: 99.7397% ( 1) 00:15:27.163 6.156 - 6.187: 99.7458% ( 1) 00:15:27.163 6.187 - 6.217: 99.7518% ( 1) 00:15:27.163 6.278 - 6.309: 99.7579% ( 1) 00:15:27.163 6.461 - 6.491: 99.7639% ( 1) 00:15:27.163 6.552 - 6.583: 99.7700% ( 1) 00:15:27.163 6.583 - 6.613: 99.7760% ( 1) 00:15:27.163 6.644 - 6.674: 99.7821% ( 1) 00:15:27.163 6.766 - 6.796: 99.7881% ( 1) 00:15:27.163 6.888 - 6.918: 99.7942% ( 1) 00:15:27.163 6.918 - 6.949: 99.8003% ( 1) 00:15:27.163 6.979 - 7.010: 99.8063% ( 1) 00:15:27.163 7.070 - 7.101: 99.8124% ( 1) 00:15:27.163 7.101 - 7.131: 99.8184% ( 1) 00:15:27.163 7.223 - 7.253: 99.8245% ( 1) 00:15:27.163 7.253 - 7.284: 99.8305% ( 1) 00:15:27.163 7.375 - 7.406: 99.8366% ( 1) 00:15:27.163 7.436 - 7.467: 99.8426% ( 1) 00:15:27.163 7.467 - 7.497: 99.8547% ( 2) 00:15:27.163 [2024-12-09 17:25:55.969200] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.163 7.497 - 7.528: 99.8608% ( 1) 00:15:27.163 7.528 - 7.558: 99.8729% ( 2) 00:15:27.163 7.558 - 7.589: 99.8789% ( 1) 00:15:27.163 7.771 - 7.802: 99.8850% ( 1) 00:15:27.163 7.863 - 7.924: 99.8910% ( 1) 00:15:27.163 8.107 - 8.168: 99.8971% ( 1) 00:15:27.163 8.229 - 8.290: 99.9032% ( 1) 00:15:27.163 8.533 - 8.594: 99.9092% ( 1) 00:15:27.163 8.594 - 8.655: 99.9153% ( 1) 00:15:27.163 8.960 - 9.021: 99.9213% ( 1) 00:15:27.163 9.265 - 9.326: 99.9334% ( 2) 00:15:27.163 9.448 - 9.509: 99.9395% ( 1) 00:15:27.163 10.118 - 10.179: 99.9455% ( 1) 00:15:27.163 19.017 - 19.139: 99.9516% ( 1) 00:15:27.163 3994.575 - 4025.783: 100.0000% ( 8) 00:15:27.163 00:15:27.163 Complete histogram 00:15:27.163 ================== 00:15:27.163 Range in us Cumulative Count 00:15:27.163 1.714 - 1.722: 0.0182% ( 3) 00:15:27.163 1.722 - 1.730: 0.1392% ( 20) 00:15:27.163 1.730 - 1.737: 0.2421% ( 17) 00:15:27.163 1.737 - 1.745: 0.2845% ( 7) 00:15:27.163 1.752 - 1.760: 0.3571% ( 12) 00:15:27.163 1.760 - 1.768: 2.8509% ( 412) 00:15:27.163 1.768 - 1.775: 20.3922% ( 2898) 00:15:27.163 1.775 - 1.783: 54.2158% ( 5588) 00:15:27.163 1.783 - 1.790: 75.4131% ( 3502) 00:15:27.163 1.790 - 1.798: 80.9576% ( 916) 00:15:27.163 1.798 - 1.806: 84.0385% ( 509) 00:15:27.163 1.806 - 1.813: 86.2054% ( 358) 00:15:27.163 1.813 - 1.821: 88.6932% ( 411) 00:15:27.163 1.821 - 1.829: 92.2947% ( 595) 00:15:27.163 1.829 - 1.836: 94.9035% ( 431) 00:15:27.163 1.836 - 1.844: 96.3077% ( 232) 00:15:27.163 1.844 - 1.851: 97.3004% ( 164) 00:15:27.163 1.851 - 1.859: 98.0389% ( 122) 00:15:27.163 1.859 - 1.867: 98.4928% ( 75) 00:15:27.163 1.867 - 1.874: 98.7410% ( 41) 00:15:27.163 1.874 - 1.882: 98.8378% ( 16) 00:15:27.163 1.882 - 1.890: 98.9347% ( 16) 00:15:27.163 1.890 - 1.897: 99.0376% ( 17) 00:15:27.163 1.897 - 1.905: 99.1163% ( 13) 00:15:27.163 1.905 - 1.912: 99.1465% ( 5) 00:15:27.163 1.912 - 1.920: 99.1708% ( 4) 00:15:27.163 1.920 - 1.928: 99.1889% ( 3) 00:15:27.163 1.928 - 1.935: 99.2192% ( 5) 00:15:27.163 1.935 - 1.943: 99.2373% ( 3) 00:15:27.163 1.950 - 1.966: 99.2434% ( 1) 00:15:27.163 1.996 - 2.011: 99.2676% ( 4) 00:15:27.163 2.027 - 2.042: 99.2737% ( 1) 00:15:27.163 2.042 - 2.057: 99.2797% ( 1) 00:15:27.163 2.164 - 2.179: 99.2858% ( 1) 00:15:27.163 2.210 - 2.225: 99.2918% ( 1) 00:15:27.163 2.225 - 2.240: 99.3039% ( 2) 00:15:27.163 2.240 - 2.255: 99.3221% ( 3) 00:15:27.163 2.316 - 2.331: 99.3281% ( 1) 00:15:27.163 2.392 - 2.408: 99.3342% ( 1) 00:15:27.163 3.429 - 3.444: 99.3402% ( 1) 00:15:27.163 4.236 - 
4.267: 99.3463% ( 1) 00:15:27.163 4.480 - 4.510: 99.3523% ( 1) 00:15:27.163 4.846 - 4.876: 99.3584% ( 1) 00:15:27.163 4.998 - 5.029: 99.3644% ( 1) 00:15:27.163 5.547 - 5.577: 99.3705% ( 1) 00:15:27.163 5.851 - 5.882: 99.3766% ( 1) 00:15:27.163 5.912 - 5.943: 99.3826% ( 1) 00:15:27.163 6.400 - 6.430: 99.3887% ( 1) 00:15:27.163 6.552 - 6.583: 99.3947% ( 1) 00:15:27.163 6.644 - 6.674: 99.4068% ( 2) 00:15:27.163 6.766 - 6.796: 99.4129% ( 1) 00:15:27.163 6.796 - 6.827: 99.4189% ( 1) 00:15:27.163 6.857 - 6.888: 99.4250% ( 1) 00:15:27.163 7.101 - 7.131: 99.4310% ( 1) 00:15:27.163 7.497 - 7.528: 99.4371% ( 1) 00:15:27.163 7.528 - 7.558: 99.4431% ( 1) 00:15:27.163 8.046 - 8.107: 99.4492% ( 1) 00:15:27.163 15.055 - 15.116: 99.4552% ( 1) 00:15:27.163 2777.478 - 2793.082: 99.4613% ( 1) 00:15:27.163 3994.575 - 4025.783: 99.9939% ( 88) 00:15:27.163 4993.219 - 5024.427: 100.0000% ( 1) 00:15:27.163 00:15:27.163 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:27.163 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:27.163 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:27.163 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:27.163 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:27.163 [ 00:15:27.163 { 00:15:27.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.163 "subtype": "Discovery", 00:15:27.163 "listen_addresses": [], 00:15:27.163 "allow_any_host": true, 00:15:27.163 "hosts": [] 00:15:27.163 }, 00:15:27.163 { 00:15:27.163 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:27.163 "subtype": "NVMe", 00:15:27.163 "listen_addresses": [ 00:15:27.163 { 00:15:27.163 "trtype": "VFIOUSER", 00:15:27.163 "adrfam": "IPv4", 00:15:27.163 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:27.163 "trsvcid": "0" 00:15:27.163 } 00:15:27.163 ], 00:15:27.163 "allow_any_host": true, 00:15:27.163 "hosts": [], 00:15:27.163 "serial_number": "SPDK1", 00:15:27.163 "model_number": "SPDK bdev Controller", 00:15:27.163 "max_namespaces": 32, 00:15:27.163 "min_cntlid": 1, 00:15:27.163 "max_cntlid": 65519, 00:15:27.163 "namespaces": [ 00:15:27.163 { 00:15:27.163 "nsid": 1, 00:15:27.163 "bdev_name": "Malloc1", 00:15:27.163 "name": "Malloc1", 00:15:27.163 "nguid": "19882B03EB814929955B73CA6185DAEC", 00:15:27.163 "uuid": "19882b03-eb81-4929-955b-73ca6185daec" 00:15:27.163 }, 00:15:27.163 { 00:15:27.163 "nsid": 2, 00:15:27.163 "bdev_name": "Malloc3", 00:15:27.163 "name": "Malloc3", 00:15:27.163 "nguid": "F61D6D6A04D742D5BF5EA38669F691CB", 00:15:27.163 "uuid": "f61d6d6a-04d7-42d5-bf5e-a38669f691cb" 00:15:27.163 } 00:15:27.163 ] 00:15:27.163 }, 00:15:27.163 { 00:15:27.163 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:27.163 "subtype": "NVMe", 00:15:27.163 "listen_addresses": [ 00:15:27.163 { 00:15:27.163 "trtype": "VFIOUSER", 00:15:27.163 "adrfam": "IPv4", 00:15:27.163 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:27.163 "trsvcid": "0" 00:15:27.163 } 00:15:27.163 ], 00:15:27.163 "allow_any_host": true, 00:15:27.163 "hosts": [], 00:15:27.163 "serial_number": "SPDK2", 00:15:27.163 "model_number": "SPDK bdev Controller", 
00:15:27.163 "max_namespaces": 32, 00:15:27.163 "min_cntlid": 1, 00:15:27.163 "max_cntlid": 65519, 00:15:27.163 "namespaces": [ 00:15:27.163 { 00:15:27.163 "nsid": 1, 00:15:27.163 "bdev_name": "Malloc2", 00:15:27.163 "name": "Malloc2", 00:15:27.163 "nguid": "6769D2A6249C4CA7A92665C774BD2D1E", 00:15:27.163 "uuid": "6769d2a6-249c-4ca7-a926-65c774bd2d1e" 00:15:27.163 } 00:15:27.164 ] 00:15:27.164 } 00:15:27.164 ] 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2548793 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:27.164 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:27.422 [2024-12-09 17:25:56.365630] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.422 Malloc4 00:15:27.422 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:27.681 [2024-12-09 17:25:56.608462] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:27.681 Asynchronous Event Request test 00:15:27.681 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.681 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.681 Registering asynchronous event callbacks... 00:15:27.681 Starting namespace attribute notice tests for all controllers... 00:15:27.681 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:27.681 aer_cb - Changed Namespace 00:15:27.681 Cleaning up... 
00:15:27.681 [ 00:15:27.681 { 00:15:27.681 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.681 "subtype": "Discovery", 00:15:27.681 "listen_addresses": [], 00:15:27.681 "allow_any_host": true, 00:15:27.681 "hosts": [] 00:15:27.681 }, 00:15:27.681 { 00:15:27.681 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:27.681 "subtype": "NVMe", 00:15:27.681 "listen_addresses": [ 00:15:27.681 { 00:15:27.681 "trtype": "VFIOUSER", 00:15:27.681 "adrfam": "IPv4", 00:15:27.681 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:27.681 "trsvcid": "0" 00:15:27.681 } 00:15:27.681 ], 00:15:27.681 "allow_any_host": true, 00:15:27.681 "hosts": [], 00:15:27.681 "serial_number": "SPDK1", 00:15:27.681 "model_number": "SPDK bdev Controller", 00:15:27.681 "max_namespaces": 32, 00:15:27.681 "min_cntlid": 1, 00:15:27.681 "max_cntlid": 65519, 00:15:27.681 "namespaces": [ 00:15:27.681 { 00:15:27.681 "nsid": 1, 00:15:27.681 "bdev_name": "Malloc1", 00:15:27.681 "name": "Malloc1", 00:15:27.681 "nguid": "19882B03EB814929955B73CA6185DAEC", 00:15:27.681 "uuid": "19882b03-eb81-4929-955b-73ca6185daec" 00:15:27.681 }, 00:15:27.681 { 00:15:27.681 "nsid": 2, 00:15:27.681 "bdev_name": "Malloc3", 00:15:27.681 "name": "Malloc3", 00:15:27.681 "nguid": "F61D6D6A04D742D5BF5EA38669F691CB", 00:15:27.681 "uuid": "f61d6d6a-04d7-42d5-bf5e-a38669f691cb" 00:15:27.681 } 00:15:27.681 ] 00:15:27.681 }, 00:15:27.681 { 00:15:27.681 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:27.681 "subtype": "NVMe", 00:15:27.681 "listen_addresses": [ 00:15:27.681 { 00:15:27.681 "trtype": "VFIOUSER", 00:15:27.681 "adrfam": "IPv4", 00:15:27.681 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:27.681 "trsvcid": "0" 00:15:27.681 } 00:15:27.681 ], 00:15:27.681 "allow_any_host": true, 00:15:27.681 "hosts": [], 00:15:27.681 "serial_number": "SPDK2", 00:15:27.681 "model_number": "SPDK bdev Controller", 00:15:27.681 "max_namespaces": 32, 00:15:27.681 "min_cntlid": 1, 00:15:27.681 "max_cntlid": 65519, 00:15:27.681 "namespaces": [ 00:15:27.681 { 00:15:27.681 "nsid": 1, 00:15:27.681 "bdev_name": "Malloc2", 00:15:27.681 "name": "Malloc2", 00:15:27.681 "nguid": "6769D2A6249C4CA7A92665C774BD2D1E", 00:15:27.681 "uuid": "6769d2a6-249c-4ca7-a926-65c774bd2d1e" 00:15:27.681 }, 00:15:27.681 { 00:15:27.681 "nsid": 2, 00:15:27.681 "bdev_name": "Malloc4", 00:15:27.681 "name": "Malloc4", 00:15:27.681 "nguid": "CC94177C10A742479FDBF3DDB2871415", 00:15:27.681 "uuid": "cc94177c-10a7-4247-9fdb-f3ddb2871415" 00:15:27.681 } 00:15:27.681 ] 00:15:27.681 } 00:15:27.681 ] 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2548793 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2541262 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2541262 ']' 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2541262 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.681 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541262 00:15:27.941 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.941 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.941 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541262' 00:15:27.941 killing process with pid 2541262 00:15:27.941 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2541262 00:15:27.941 17:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2541262 00:15:27.941 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2549028 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2549028' 00:15:28.200 Process pid: 2549028 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2549028 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2549028 ']' 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.200 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:28.200 [2024-12-09 17:25:57.169202] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:28.200 [2024-12-09 17:25:57.170030] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:15:28.200 [2024-12-09 17:25:57.170068] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.200 [2024-12-09 17:25:57.244570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.200 [2024-12-09 17:25:57.280670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.200 [2024-12-09 17:25:57.280709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.200 [2024-12-09 17:25:57.280716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.200 [2024-12-09 17:25:57.280721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.200 [2024-12-09 17:25:57.280727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.200 [2024-12-09 17:25:57.282196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.200 [2024-12-09 17:25:57.282324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.200 [2024-12-09 17:25:57.282357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.200 [2024-12-09 17:25:57.282359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.200 [2024-12-09 17:25:57.350392] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:28.201 [2024-12-09 17:25:57.351362] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:28.201 [2024-12-09 17:25:57.351399] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:28.201 [2024-12-09 17:25:57.351567] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:28.201 [2024-12-09 17:25:57.351626] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
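The interrupt-mode pass repeats the whole setup with the target's reactors in non-polling mode. A hedged sketch of the bring-up, with every flag taken from the nvmf_tgt and rpc.py invocations in this log; -M -I are the extra transport_args this test threads through, and the per-device malloc/subsystem/listener RPCs follow below:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
sleep 1                                    # mirrors the script's own sleep before issuing RPCs
"$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I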
00:15:28.460 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.460 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:28.460 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:29.397 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:29.656 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:29.656 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:29.656 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:29.656 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:29.656 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:29.656 Malloc1 00:15:29.656 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:29.914 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:30.173 17:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:30.432 17:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.432 17:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:30.432 17:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:30.432 Malloc2 00:15:30.432 17:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:30.690 17:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:30.949 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2549028 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2549028 ']' 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2549028 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549028 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549028' 00:15:31.208 killing process with pid 2549028 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2549028 00:15:31.208 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2549028 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:31.467 00:15:31.467 real 0m50.749s 00:15:31.467 user 3m16.294s 00:15:31.467 sys 0m3.288s 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:31.467 ************************************ 00:15:31.467 END TEST nvmf_vfio_user 00:15:31.467 ************************************ 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.467 ************************************ 00:15:31.467 START TEST nvmf_vfio_user_nvme_compliance 00:15:31.467 ************************************ 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:31.467 * Looking for test storage... 
00:15:31.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:31.467 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.727 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:31.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.727 --rc genhtml_branch_coverage=1 00:15:31.727 --rc genhtml_function_coverage=1 00:15:31.727 --rc genhtml_legend=1 00:15:31.727 --rc geninfo_all_blocks=1 00:15:31.728 --rc geninfo_unexecuted_blocks=1 00:15:31.728 00:15:31.728 ' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:31.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.728 --rc genhtml_branch_coverage=1 00:15:31.728 --rc genhtml_function_coverage=1 00:15:31.728 --rc genhtml_legend=1 00:15:31.728 --rc geninfo_all_blocks=1 00:15:31.728 --rc geninfo_unexecuted_blocks=1 00:15:31.728 00:15:31.728 ' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:31.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.728 --rc genhtml_branch_coverage=1 00:15:31.728 --rc genhtml_function_coverage=1 00:15:31.728 --rc genhtml_legend=1 00:15:31.728 --rc geninfo_all_blocks=1 00:15:31.728 --rc geninfo_unexecuted_blocks=1 00:15:31.728 00:15:31.728 ' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:31.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.728 --rc genhtml_branch_coverage=1 00:15:31.728 --rc genhtml_function_coverage=1 00:15:31.728 --rc genhtml_legend=1 00:15:31.728 --rc geninfo_all_blocks=1 00:15:31.728 --rc 
geninfo_unexecuted_blocks=1 00:15:31.728 00:15:31.728 ' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2549574 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2549574' 00:15:31.728 Process pid: 2549574 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2549574 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2549574 ']' 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.728 17:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.728 [2024-12-09 17:26:00.810321] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:15:31.728 [2024-12-09 17:26:00.810371] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.728 [2024-12-09 17:26:00.886606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.075 [2024-12-09 17:26:00.926029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.075 [2024-12-09 17:26:00.926063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.075 [2024-12-09 17:26:00.926072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.075 [2024-12-09 17:26:00.926080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.075 [2024-12-09 17:26:00.926085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.075 [2024-12-09 17:26:00.927424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.075 [2024-12-09 17:26:00.927532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.075 [2024-12-09 17:26:00.927533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.075 17:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.075 17:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:32.075 17:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.055 malloc0 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:33.055 17:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.055 17:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:33.055 00:15:33.055 00:15:33.055 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.055 http://cunit.sourceforge.net/ 00:15:33.055 00:15:33.055 00:15:33.055 Suite: nvme_compliance 00:15:33.314 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 17:26:02.263613] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.314 [2024-12-09 17:26:02.264957] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:33.314 [2024-12-09 17:26:02.264973] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:33.314 [2024-12-09 17:26:02.264979] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:33.314 [2024-12-09 17:26:02.266638] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.314 passed 00:15:33.314 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 17:26:02.349209] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.314 [2024-12-09 17:26:02.352231] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.314 passed 00:15:33.314 Test: admin_identify_ns ...[2024-12-09 17:26:02.431605] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.573 [2024-12-09 17:26:02.492228] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:33.573 [2024-12-09 17:26:02.500231] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:33.573 [2024-12-09 17:26:02.521344] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:33.573 passed 00:15:33.573 Test: admin_get_features_mandatory_features ...[2024-12-09 17:26:02.597197] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.573 [2024-12-09 17:26:02.600223] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.573 passed 00:15:33.573 Test: admin_get_features_optional_features ...[2024-12-09 17:26:02.678762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.573 [2024-12-09 17:26:02.683795] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.573 passed 00:15:33.831 Test: admin_set_features_number_of_queues ...[2024-12-09 17:26:02.762613] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.831 [2024-12-09 17:26:02.867314] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.831 passed 00:15:33.831 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 17:26:02.945080] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.831 [2024-12-09 17:26:02.948109] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.831 passed 00:15:34.090 Test: admin_get_log_page_with_lpo ...[2024-12-09 17:26:03.024837] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.090 [2024-12-09 17:26:03.092237] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:34.090 [2024-12-09 17:26:03.105308] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.090 passed 00:15:34.090 Test: fabric_property_get ...[2024-12-09 17:26:03.182736] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.090 [2024-12-09 17:26:03.183969] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:34.090 [2024-12-09 17:26:03.185760] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.090 passed 00:15:34.090 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 17:26:03.264268] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.090 [2024-12-09 17:26:03.265497] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:34.090 [2024-12-09 17:26:03.267285] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.349 passed 00:15:34.349 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 17:26:03.347072] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.349 [2024-12-09 17:26:03.431234] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:34.349 [2024-12-09 17:26:03.447226] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:34.349 [2024-12-09 17:26:03.452303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.349 passed 00:15:34.608 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 17:26:03.528104] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.608 [2024-12-09 17:26:03.529348] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:34.608 [2024-12-09 17:26:03.531126] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.608 passed 00:15:34.608 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 17:26:03.607838] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.608 [2024-12-09 17:26:03.683223] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:34.608 [2024-12-09 17:26:03.707224] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:34.608 [2024-12-09 17:26:03.712319] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.608 passed 00:15:34.867 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 17:26:03.788959] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.867 [2024-12-09 17:26:03.790196] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:34.867 [2024-12-09 17:26:03.790225] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:34.867 [2024-12-09 17:26:03.791984] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.867 passed 00:15:34.867 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 17:26:03.869772] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.867 [2024-12-09 17:26:03.962231] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:34.867 [2024-12-09 17:26:03.970232] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:34.867 [2024-12-09 17:26:03.978226] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:34.867 [2024-12-09 17:26:03.986230] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:34.867 [2024-12-09 17:26:04.015312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.867 passed 00:15:35.126 Test: admin_create_io_sq_verify_pc ...[2024-12-09 17:26:04.089114] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.126 [2024-12-09 17:26:04.104230] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:35.126 [2024-12-09 17:26:04.124272] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.126 passed 00:15:35.126 Test: admin_create_io_qp_max_qps ...[2024-12-09 17:26:04.199782] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.505 [2024-12-09 17:26:05.309230] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:36.763 [2024-12-09 17:26:05.697620] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.763 passed 00:15:36.763 Test: admin_create_io_sq_shared_cq ...[2024-12-09 17:26:05.775529] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.763 [2024-12-09 17:26:05.908224] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.022 [2024-12-09 17:26:05.945266] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.022 passed 00:15:37.022 00:15:37.022 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.022 suites 1 1 n/a 0 0 00:15:37.023 tests 18 18 18 0 0 00:15:37.023 asserts 
360 360 360 0 n/a 00:15:37.023 00:15:37.023 Elapsed time = 1.512 seconds 00:15:37.023 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2549574 00:15:37.023 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2549574 ']' 00:15:37.023 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2549574 00:15:37.023 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:37.023 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.023 17:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549574 00:15:37.023 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.023 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.023 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549574' 00:15:37.023 killing process with pid 2549574 00:15:37.023 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2549574 00:15:37.023 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2549574 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:37.282 00:15:37.282 real 0m5.669s 00:15:37.282 user 0m15.841s 00:15:37.282 sys 0m0.528s 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.282 ************************************ 00:15:37.282 END TEST nvmf_vfio_user_nvme_compliance 00:15:37.282 ************************************ 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.282 ************************************ 00:15:37.282 START TEST nvmf_vfio_user_fuzz 00:15:37.282 ************************************ 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:37.282 * Looking for test storage... 
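The teardown traced above is autotest_common.sh's killprocess helper. A minimal sketch of the pattern, reconstructed from the xtrace lines (the logged helper also inspects the process name via ps --no-headers -o comm= and refuses to signal sudo, summarized in a comment here; wait only succeeds because the target is a child of the test shell):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # @954: a pid argument is required
    kill -0 "$pid" 2> /dev/null || return 0   # @958: nothing to do if already gone
    # @959-@964: verify ps --no-headers -o comm= "$pid" is not sudo before signalling
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973
    wait "$pid"                               # @978
}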
00:15:37.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:37.282 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.542 --rc genhtml_branch_coverage=1 00:15:37.542 --rc genhtml_function_coverage=1 00:15:37.542 --rc genhtml_legend=1 00:15:37.542 --rc geninfo_all_blocks=1 00:15:37.542 --rc geninfo_unexecuted_blocks=1 00:15:37.542 00:15:37.542 ' 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.542 --rc genhtml_branch_coverage=1 00:15:37.542 --rc genhtml_function_coverage=1 00:15:37.542 --rc genhtml_legend=1 00:15:37.542 --rc geninfo_all_blocks=1 00:15:37.542 --rc geninfo_unexecuted_blocks=1 00:15:37.542 00:15:37.542 ' 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.542 --rc genhtml_branch_coverage=1 00:15:37.542 --rc genhtml_function_coverage=1 00:15:37.542 --rc genhtml_legend=1 00:15:37.542 --rc geninfo_all_blocks=1 00:15:37.542 --rc geninfo_unexecuted_blocks=1 00:15:37.542 00:15:37.542 ' 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.542 --rc genhtml_branch_coverage=1 00:15:37.542 --rc genhtml_function_coverage=1 00:15:37.542 --rc genhtml_legend=1 00:15:37.542 --rc geninfo_all_blocks=1 00:15:37.542 --rc geninfo_unexecuted_blocks=1 00:15:37.542 00:15:37.542 ' 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.542 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:37.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2550555 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2550555' 00:15:37.543 Process pid: 2550555 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2550555 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2550555 ']' 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
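The waitforlisten call above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A plausible minimal version of that loop, offered as a sketch only: the body and the retry budget are assumptions, and just the socket path, the pid liveness check, and the rpc_get_methods RPC are taken from the trace and from SPDK's public tooling:

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1   # assumed timeout behavior
}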
00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.543 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.802 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.802 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:37.802 17:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.740 malloc0 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
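Condensed, the vfio-user fuzz target above is provisioned with five RPCs before the fuzzer attaches; rpc_cmd is presumably the framework's wrapper around scripts/rpc.py, and the flags below are exactly those traced:

scripts/rpc.py nvmf_create_transport -t VFIOUSER
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that follows then connects with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' and fuzzes for 30 seconds with seed 123456, matching the -t 30 -S 123456 flags in the trace below.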
00:15:38.740 17:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:10.815 Fuzzing completed. Shutting down the fuzz application 00:16:10.815 00:16:10.815 Dumping successful admin opcodes: 00:16:10.815 9, 10, 00:16:10.815 Dumping successful io opcodes: 00:16:10.815 0, 00:16:10.815 NS: 0x20000081ef00 I/O qp, Total commands completed: 1010252, total successful commands: 3960, random_seed: 593214784 00:16:10.815 NS: 0x20000081ef00 admin qp, Total commands completed: 243776, total successful commands: 57, random_seed: 3101221440 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2550555 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2550555 ']' 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2550555 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550555 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550555' 00:16:10.815 killing process with pid 2550555 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2550555 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2550555 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:10.815 00:16:10.815 real 0m32.218s 00:16:10.815 user 0m28.983s 00:16:10.815 sys 0m32.187s 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.815 ************************************ 
00:16:10.815 END TEST nvmf_vfio_user_fuzz 00:16:10.815 ************************************ 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:10.815 17:26:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:10.816 ************************************ 00:16:10.816 START TEST nvmf_auth_target 00:16:10.816 ************************************ 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:10.816 * Looking for test storage... 00:16:10.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.816 --rc genhtml_branch_coverage=1 00:16:10.816 --rc genhtml_function_coverage=1 00:16:10.816 --rc genhtml_legend=1 00:16:10.816 --rc geninfo_all_blocks=1 00:16:10.816 --rc geninfo_unexecuted_blocks=1 00:16:10.816 00:16:10.816 ' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.816 --rc genhtml_branch_coverage=1 00:16:10.816 --rc genhtml_function_coverage=1 00:16:10.816 --rc genhtml_legend=1 00:16:10.816 --rc geninfo_all_blocks=1 00:16:10.816 --rc geninfo_unexecuted_blocks=1 00:16:10.816 00:16:10.816 ' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.816 --rc genhtml_branch_coverage=1 00:16:10.816 --rc genhtml_function_coverage=1 00:16:10.816 --rc genhtml_legend=1 00:16:10.816 --rc geninfo_all_blocks=1 00:16:10.816 --rc geninfo_unexecuted_blocks=1 00:16:10.816 00:16:10.816 ' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.816 --rc genhtml_branch_coverage=1 00:16:10.816 --rc genhtml_function_coverage=1 00:16:10.816 --rc genhtml_legend=1 00:16:10.816 --rc geninfo_all_blocks=1 00:16:10.816 --rc geninfo_unexecuted_blocks=1 00:16:10.816 00:16:10.816 ' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.816 17:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:10.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:10.816 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:10.817 17:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:16.092 
17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:16.092 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.092 17:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:16.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:16.092 Found net devices under 0000:af:00.0: cvl_0_0 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:16.092 Found net devices under 0000:af:00.1: cvl_0_1 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:16.092 17:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:16.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:16:16.092 00:16:16.092 --- 10.0.0.2 ping statistics --- 00:16:16.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.092 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:16.092 00:16:16.092 --- 10.0.0.1 ping statistics --- 00:16:16.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.092 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:16.092 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2558966 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2558966 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2558966 ']' 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
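From here on two SPDK processes are in play: the nvmf target started above inside the namespace (with -L nvmf_auth tracing), and a host-side spdk_tgt on /var/tmp/host.sock started just below. A sketch of the pair, assuming both are backgrounded and then waited on via their RPC sockets, as nvmfappstart/waitforlisten effectively do:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target app, in the namespace, RPC on the default /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
    # host app, in the root namespace, RPC on /var/tmp/host.sock
    "$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &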
00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.093 17:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2558988 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=64b01d05abfe39df57b94fd7726f049e389a3f7fc2539e96 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Wv6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 64b01d05abfe39df57b94fd7726f049e389a3f7fc2539e96 0 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 64b01d05abfe39df57b94fd7726f049e389a3f7fc2539e96 0 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=64b01d05abfe39df57b94fd7726f049e389a3f7fc2539e96 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
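The inline snippet fed to "python -" above is not echoed by xtrace. The following is a plausible reconstruction, inferred from the DHHC-1 secrets visible later in this log: the ASCII hex string from xxd is base64-encoded with a CRC-32 appended (assumed little-endian, as in nvme-cli gen-dhchap-key), and the two-digit field is the digest id from the digests map (0=null, 1=sha256, 2=sha384, 3=sha512):

    key=64b01d05abfe39df57b94fd7726f049e389a3f7fc2539e96   # hex from xxd above
    digest=0
    python3 - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the hex string itself is the secret
    digest = int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: LE CRC-32 trailer
    print(f"DHHC-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
    EOF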
00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Wv6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Wv6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Wv6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c02e58b05ca5cad7244aeb55db1ae00eadc595e0b3531d90329ee9dca5347298 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hTm 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c02e58b05ca5cad7244aeb55db1ae00eadc595e0b3531d90329ee9dca5347298 3 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c02e58b05ca5cad7244aeb55db1ae00eadc595e0b3531d90329ee9dca5347298 3 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c02e58b05ca5cad7244aeb55db1ae00eadc595e0b3531d90329ee9dca5347298 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hTm 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hTm 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hTm 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=44930aca1bda7dbea03ad72276d5b10b 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rQ6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 44930aca1bda7dbea03ad72276d5b10b 1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 44930aca1bda7dbea03ad72276d5b10b 1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=44930aca1bda7dbea03ad72276d5b10b 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rQ6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rQ6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.rQ6 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9be7b068e5e65cf74750b0a70b95d92c316fc559ada13f2b 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.33M 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9be7b068e5e65cf74750b0a70b95d92c316fc559ada13f2b 2 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9be7b068e5e65cf74750b0a70b95d92c316fc559ada13f2b 2 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.093 17:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9be7b068e5e65cf74750b0a70b95d92c316fc559ada13f2b 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:16.093 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.33M 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.33M 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.33M 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=77e4c4572bf5704f401d9518e6f2ab50a1dfe17ff0ee0502 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FUw 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 77e4c4572bf5704f401d9518e6f2ab50a1dfe17ff0ee0502 2 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 77e4c4572bf5704f401d9518e6f2ab50a1dfe17ff0ee0502 2 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=77e4c4572bf5704f401d9518e6f2ab50a1dfe17ff0ee0502 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FUw 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FUw 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.FUw 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=57d56b1c5b6601704d030fa352eb53e3 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.APp 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 57d56b1c5b6601704d030fa352eb53e3 1 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 57d56b1c5b6601704d030fa352eb53e3 1 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=57d56b1c5b6601704d030fa352eb53e3 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.APp 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.APp 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.APp 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=610f20830a6b51b46302f9685520d944c3a9fb0b0ec41668de3aaad09e7be683 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yZz 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 610f20830a6b51b46302f9685520d944c3a9fb0b0ec41668de3aaad09e7be683 3 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 610f20830a6b51b46302f9685520d944c3a9fb0b0ec41668de3aaad09e7be683 3 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=610f20830a6b51b46302f9685520d944c3a9fb0b0ec41668de3aaad09e7be683 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yZz 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yZz 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.yZz 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2558966 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2558966 ']' 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.352 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2558988 /var/tmp/host.sock 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2558988 ']' 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:16.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
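Summarizing the key material generated above, with the file paths from this run (keys[i] is passed as --dhchap-key, ckeys[i] as the controller key for bidirectional authentication; ckeys[3] is left empty, which later exercises unidirectional auth for keyid 3):

    keyid   keys[i]                    ckeys[i]
    0       /tmp/spdk.key-null.Wv6     /tmp/spdk.key-sha512.hTm
    1       /tmp/spdk.key-sha256.rQ6   /tmp/spdk.key-sha384.33M
    2       /tmp/spdk.key-sha384.FUw   /tmp/spdk.key-sha256.APp
    3       /tmp/spdk.key-sha512.yZz   (none)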
00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.611 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Wv6 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Wv6 00:16:16.870 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Wv6 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.hTm ]] 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hTm 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hTm 00:16:17.129 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hTm 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rQ6 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.388 17:26:46 
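Each key file is registered twice, once in the target and once in the host app, as traced above; rpc_cmd and hostrpc are wrappers around scripts/rpc.py aimed at the two RPC sockets. The pair of calls for key0, spelled out:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side (default socket /var/tmp/spdk.sock)
    $rpc keyring_file_add_key key0 /tmp/spdk.key-null.Wv6
    # host side (spdk_tgt listening on /var/tmp/host.sock)
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Wv6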
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rQ6 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rQ6 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.33M ]] 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.33M 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.33M 00:16:17.388 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.33M 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FUw 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FUw 00:16:17.647 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FUw 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.APp ]] 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.APp 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.APp 00:16:17.906 17:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.APp 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.165 17:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yZz 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yZz 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yZz 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.165 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.424 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.424 
17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.683 00:16:18.683 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.683 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.683 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.942 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.942 { 00:16:18.942 "cntlid": 1, 00:16:18.942 "qid": 0, 00:16:18.942 "state": "enabled", 00:16:18.942 "thread": "nvmf_tgt_poll_group_000", 00:16:18.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:18.942 "listen_address": { 00:16:18.942 "trtype": "TCP", 00:16:18.942 "adrfam": "IPv4", 00:16:18.942 "traddr": "10.0.0.2", 00:16:18.942 "trsvcid": "4420" 00:16:18.942 }, 00:16:18.942 "peer_address": { 00:16:18.942 "trtype": "TCP", 00:16:18.942 "adrfam": "IPv4", 00:16:18.942 "traddr": "10.0.0.1", 00:16:18.942 "trsvcid": "40734" 00:16:18.942 }, 00:16:18.942 "auth": { 00:16:18.942 "state": "completed", 00:16:18.942 "digest": "sha256", 00:16:18.942 "dhgroup": "null" 00:16:18.942 } 00:16:18.942 } 00:16:18.942 ]' 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.942 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.201 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.201 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.201 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.201 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
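After attaching the controller, connect_authenticate asserts via jq over the nvmf_subsystem_get_qpairs output that the qpair really negotiated the expected parameters; condensed from the checks traced above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]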
DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:19.201 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.768 17:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.027 17:26:49 
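Besides the SPDK host stack, each key pair is also exercised through the kernel initiator via nvme-cli, with the formatted DHHC-1 secrets passed directly; the disconnect then resets state for the next keyid. The keyid-0 invocation from above, reflowed for readability:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==:' \
      --dhchap-ctrl-secret 'DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0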
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.027 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.286 00:16:20.286 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.286 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.286 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.545 { 00:16:20.545 "cntlid": 3, 00:16:20.545 "qid": 0, 00:16:20.545 "state": "enabled", 00:16:20.545 "thread": "nvmf_tgt_poll_group_000", 00:16:20.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:20.545 "listen_address": { 00:16:20.545 "trtype": "TCP", 00:16:20.545 "adrfam": "IPv4", 00:16:20.545 "traddr": "10.0.0.2", 00:16:20.545 "trsvcid": "4420" 00:16:20.545 }, 00:16:20.545 "peer_address": { 00:16:20.545 "trtype": "TCP", 00:16:20.545 "adrfam": "IPv4", 00:16:20.545 "traddr": "10.0.0.1", 00:16:20.545 "trsvcid": "40772" 00:16:20.545 }, 00:16:20.545 "auth": { 00:16:20.545 "state": "completed", 00:16:20.545 "digest": "sha256", 00:16:20.545 "dhgroup": "null" 00:16:20.545 } 00:16:20.545 } 00:16:20.545 ]' 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.545 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.804 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.804 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.804 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.804 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:20.804 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.371 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.630 17:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.630 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.888 00:16:21.888 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.889 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.889 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.147 { 00:16:22.147 "cntlid": 5, 00:16:22.147 "qid": 0, 00:16:22.147 "state": "enabled", 00:16:22.147 "thread": "nvmf_tgt_poll_group_000", 00:16:22.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:22.147 "listen_address": { 00:16:22.147 "trtype": "TCP", 00:16:22.147 "adrfam": "IPv4", 00:16:22.147 "traddr": "10.0.0.2", 00:16:22.147 "trsvcid": "4420" 00:16:22.147 }, 00:16:22.147 "peer_address": { 00:16:22.147 "trtype": "TCP", 00:16:22.147 "adrfam": "IPv4", 00:16:22.147 "traddr": "10.0.0.1", 00:16:22.147 "trsvcid": "40802" 00:16:22.147 }, 00:16:22.147 "auth": { 00:16:22.147 "state": "completed", 00:16:22.147 "digest": "sha256", 00:16:22.147 "dhgroup": "null" 00:16:22.147 } 00:16:22.147 } 00:16:22.147 ]' 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.147 17:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:22.147 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:22.406 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7:
00:16:22.406 17:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7:
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:22.974 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:23.234 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:23.493
00:16:23.493 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:23.493 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:23.493 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:23.752 {
00:16:23.752 "cntlid": 7,
00:16:23.752 "qid": 0,
00:16:23.752 "state": "enabled",
00:16:23.752 "thread": "nvmf_tgt_poll_group_000",
00:16:23.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:23.752 "listen_address": {
00:16:23.752 "trtype": "TCP",
00:16:23.752 "adrfam": "IPv4",
00:16:23.752 "traddr": "10.0.0.2",
00:16:23.752 "trsvcid": "4420"
00:16:23.752 },
00:16:23.752 "peer_address": {
00:16:23.752 "trtype": "TCP",
00:16:23.752 "adrfam": "IPv4",
00:16:23.752 "traddr": "10.0.0.1",
00:16:23.752 "trsvcid": "40818"
00:16:23.752 },
00:16:23.752 "auth": {
00:16:23.752 "state": "completed",
00:16:23.752 "digest": "sha256",
00:16:23.752 "dhgroup": "null"
00:16:23.752 }
00:16:23.752 }
00:16:23.752 ]'
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:23.752 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:24.011 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=:
00:16:24.011 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=:
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:24.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
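The block above completes one full connect_authenticate cycle (here key3 with the null DH group). Condensed into plain shell, one such cycle as exercised by this trace looks roughly like the sketch below. This is a reconstruction read back from the trace itself, not the authoritative target/auth.sh source: $hostnqn, $subnqn and $key3 are stand-ins introduced for readability, hostrpc in the trace is rpc.py pointed at the initiator's /var/tmp/host.sock, and rpc_cmd presumably drives the target over its default RPC socket.

  # one connect_authenticate iteration, per the trace (sketch; names are stand-ins)
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0
  # restrict the SPDK initiator to the digest/DH group under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  # allow the host on the target with the key being tested
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
  # SPDK-initiator authentication pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"      # qpair dump, asserted on with jq
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel-initiator authentication pass with the same DHHC-1 secret
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid "${hostnqn#*uuid:}" -l 0 --dhchap-secret "$key3"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Keys that have a paired controller secret in the trace (key0, key1, key2) additionally carry --dhchap-ctrlr-key ckeyN / --dhchap-ctrl-secret on the same calls.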
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:24.578 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.837 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.838 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.838 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.838 17:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:25.097
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.097 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:25.097 {
00:16:25.097 "cntlid": 9,
00:16:25.097 "qid": 0,
00:16:25.097 "state": "enabled",
00:16:25.097 "thread": "nvmf_tgt_poll_group_000",
00:16:25.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:25.097 "listen_address": {
00:16:25.097 "trtype": "TCP",
00:16:25.097 "adrfam": "IPv4",
00:16:25.097 "traddr": "10.0.0.2",
00:16:25.097 "trsvcid": "4420"
00:16:25.097 },
00:16:25.097 "peer_address": {
00:16:25.097 "trtype": "TCP",
00:16:25.097 "adrfam": "IPv4",
00:16:25.097 "traddr": "10.0.0.1",
00:16:25.097 "trsvcid": "40838"
00:16:25.097 },
00:16:25.097 "auth": {
00:16:25.097 "state": "completed",
00:16:25.097 "digest": "sha256",
00:16:25.097 "dhgroup": "ffdhe2048"
00:16:25.097 }
00:16:25.097 }
00:16:25.097 ]'
00:16:25.355 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.356 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.614 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=:
00:16:25.614 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=:
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.178 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.436
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:26.436 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:26.693 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:26.694 {
00:16:26.694 "cntlid": 11,
00:16:26.694 "qid": 0,
00:16:26.694 "state": "enabled",
00:16:26.694 "thread": "nvmf_tgt_poll_group_000",
00:16:26.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:26.694 "listen_address": {
00:16:26.694 "trtype": "TCP",
00:16:26.694 "adrfam": "IPv4",
00:16:26.694 "traddr": "10.0.0.2",
00:16:26.694 "trsvcid": "4420"
00:16:26.694 },
00:16:26.694 "peer_address": {
00:16:26.694 "trtype": "TCP",
00:16:26.694 "adrfam": "IPv4",
00:16:26.694 "traddr": "10.0.0.1",
00:16:26.694 "trsvcid": "50890"
00:16:26.694 },
00:16:26.694 "auth": {
00:16:26.694 "state": "completed",
00:16:26.694 "digest": "sha256",
00:16:26.694 "dhgroup": "ffdhe2048"
00:16:26.694 }
00:16:26.694 }
00:16:26.694 ]'
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:26.694 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:26.952 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:26.952 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:26.952 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.952 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.952 17:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.210 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==:
00:16:27.211 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==:
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:27.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:27.778 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.779 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:28.037
00:16:28.037 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:28.037 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.037 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.296 {
00:16:28.296 "cntlid": 13,
00:16:28.296 "qid": 0,
00:16:28.296 "state": "enabled",
00:16:28.296 "thread": "nvmf_tgt_poll_group_000",
00:16:28.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:28.296 "listen_address": {
00:16:28.296 "trtype": "TCP",
00:16:28.296 "adrfam": "IPv4",
00:16:28.296 "traddr": "10.0.0.2",
00:16:28.296 "trsvcid": "4420"
00:16:28.296 },
00:16:28.296 "peer_address": {
00:16:28.296 "trtype": "TCP",
00:16:28.296 "adrfam": "IPv4",
00:16:28.296 "traddr": "10.0.0.1",
00:16:28.296 "trsvcid": "50898"
00:16:28.296 },
00:16:28.296 "auth": {
00:16:28.296 "state": "completed",
00:16:28.296 "digest": "sha256",
00:16:28.296 "dhgroup": "ffdhe2048"
00:16:28.296 }
00:16:28.296 }
00:16:28.296 ]'
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:28.296 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:28.555 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.555 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.555 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.555 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7:
00:16:28.555 17:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7:
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:29.123 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:29.382 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:29.641
00:16:29.641 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:29.641 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:29.641 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:29.900 {
00:16:29.900 "cntlid": 15,
00:16:29.900 "qid": 0,
00:16:29.900 "state": "enabled",
00:16:29.900 "thread": "nvmf_tgt_poll_group_000",
00:16:29.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:29.900 "listen_address": {
00:16:29.900 "trtype": "TCP",
00:16:29.900 "adrfam": "IPv4",
00:16:29.900 "traddr": "10.0.0.2",
00:16:29.900 "trsvcid": "4420"
00:16:29.900 },
00:16:29.900 "peer_address": {
00:16:29.900 "trtype": "TCP",
00:16:29.900 "adrfam": "IPv4",
00:16:29.900 "traddr": "10.0.0.1",
00:16:29.900 "trsvcid": "50922" },
00:16:29.900 "auth": {
00:16:29.900 "state": "completed",
00:16:29.900 "digest": "sha256",
00:16:29.900 "dhgroup": "ffdhe2048"
00:16:29.900 }
00:16:29.900 }
00:16:29.900 ]'
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:29.900 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:29.900 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.900 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.900 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.159 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=:
00:16:30.159 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=:
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:30.726 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.985 17:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.985 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.985 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.985 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.985 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:31.243
00:16:31.243 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:31.243 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:31.243 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:31.502 {
00:16:31.502 "cntlid": 17,
00:16:31.502 "qid": 0,
00:16:31.502 "state": "enabled",
00:16:31.502 "thread": "nvmf_tgt_poll_group_000",
00:16:31.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:31.502 "listen_address": {
00:16:31.502 "trtype": "TCP",
00:16:31.502 "adrfam": "IPv4",
00:16:31.502 "traddr": "10.0.0.2",
00:16:31.502 "trsvcid": "4420"
00:16:31.502 },
00:16:31.502 "peer_address": {
00:16:31.502 "trtype": "TCP",
00:16:31.502 "adrfam": "IPv4",
00:16:31.502 "traddr": "10.0.0.1",
00:16:31.502 "trsvcid": "50946"
00:16:31.502 },
00:16:31.502 "auth": {
00:16:31.502 "state": "completed",
00:16:31.502 "digest": "sha256",
00:16:31.502 "dhgroup": "ffdhe3072"
00:16:31.502 }
00:16:31.502 }
00:16:31.502 ]'
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.502 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.760 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=:
00:16:31.760 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=:
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:32.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:32.327 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.586 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.845
00:16:32.845 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.845 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.845 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:33.104 {
00:16:33.104 "cntlid": 19,
00:16:33.104 "qid": 0,
00:16:33.104 "state": "enabled",
00:16:33.104 "thread": "nvmf_tgt_poll_group_000",
00:16:33.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:33.104 "listen_address": {
00:16:33.104 "trtype": "TCP",
00:16:33.104 "adrfam": "IPv4",
00:16:33.104 "traddr": "10.0.0.2",
00:16:33.104 "trsvcid": "4420"
00:16:33.104 },
00:16:33.104 "peer_address": {
00:16:33.104 "trtype": "TCP",
00:16:33.104 "adrfam": "IPv4",
00:16:33.104 "traddr": "10.0.0.1",
00:16:33.104 "trsvcid": "50974"
00:16:33.104 },
00:16:33.104 "auth": {
00:16:33.104 "state": "completed",
00:16:33.104 "digest": "sha256",
00:16:33.104 "dhgroup": "ffdhe3072"
00:16:33.104 }
00:16:33.104 }
00:16:33.104 ]'
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:33.104 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.363 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==:
00:16:33.363 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==:
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:33.931 17:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.190 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.450
00:16:34.450 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.450 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.450 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.709 {
00:16:34.709 "cntlid": 21,
00:16:34.709 "qid": 0,
00:16:34.709 "state": "enabled",
00:16:34.709 "thread": "nvmf_tgt_poll_group_000",
00:16:34.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:34.709 "listen_address": {
00:16:34.709 "trtype": "TCP",
00:16:34.709 "adrfam": "IPv4",
00:16:34.709 "traddr": "10.0.0.2",
00:16:34.709 "trsvcid": "4420"
00:16:34.709 },
00:16:34.709 "peer_address": {
00:16:34.709 "trtype": "TCP",
00:16:34.709 "adrfam": "IPv4",
00:16:34.709 "traddr": "10.0.0.1",
00:16:34.709 "trsvcid": "51012"
00:16:34.709 },
00:16:34.709 "auth": {
00:16:34.709 "state": "completed",
00:16:34.709 "digest": "sha256",
00:16:34.709 "dhgroup": "ffdhe3072"
00:16:34.709 }
00:16:34.709 }
00:16:34.709 ]'
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:34.709 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:34.709 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.709 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.709 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.968 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7:
00:16:34.968 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7:
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:35.535 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:35.794 17:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:36.053
00:16:36.053 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:36.053 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:36.053 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:36.312 {
00:16:36.312 "cntlid": 23,
00:16:36.312 "qid": 0,
00:16:36.312 "state": "enabled",
00:16:36.312 "thread": "nvmf_tgt_poll_group_000",
00:16:36.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:16:36.312 "listen_address": {
00:16:36.312 "trtype": "TCP",
00:16:36.312 "adrfam": "IPv4",
00:16:36.312 "traddr": "10.0.0.2",
00:16:36.312 "trsvcid": "4420"
00:16:36.312 },
00:16:36.312 "peer_address": {
00:16:36.312 "trtype": "TCP",
00:16:36.312 "adrfam": "IPv4",
00:16:36.312 "traddr": "10.0.0.1",
00:16:36.312 "trsvcid": "36414"
00:16:36.312 },
00:16:36.312 "auth": {
00:16:36.312 "state": "completed",
00:16:36.312 "digest": "sha256",
00:16:36.312 "dhgroup": "ffdhe3072"
00:16:36.312 }
00:16:36.312 }
00:16:36.312 ]'
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.312 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.570 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=:
00:16:36.570 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=:
00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:37.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
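With ffdhe3072 done for key0 through key3, the trace moves on to ffdhe4096 and repeats the same four-key pass. The loop heads visible at target/auth.sh@119-121 imply a driver of roughly this shape (a reconstruction from the trace; the exact contents of the dhgroups and keys arrays are not shown in this excerpt):

  # sketch of the loop driving this section of the test
  for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do         # 0..3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done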
== 0 ]] 00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.138 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.397 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.656 00:16:37.656 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.656 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.656 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.916 { 00:16:37.916 "cntlid": 25, 00:16:37.916 "qid": 0, 00:16:37.916 "state": "enabled", 00:16:37.916 "thread": "nvmf_tgt_poll_group_000", 00:16:37.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:37.916 "listen_address": { 00:16:37.916 "trtype": "TCP", 00:16:37.916 "adrfam": "IPv4", 00:16:37.916 "traddr": "10.0.0.2", 00:16:37.916 "trsvcid": "4420" 00:16:37.916 }, 00:16:37.916 "peer_address": { 00:16:37.916 "trtype": "TCP", 00:16:37.916 "adrfam": "IPv4", 00:16:37.916 "traddr": "10.0.0.1", 00:16:37.916 "trsvcid": "36440" 00:16:37.916 }, 00:16:37.916 "auth": { 00:16:37.916 "state": "completed", 00:16:37.916 "digest": "sha256", 00:16:37.916 "dhgroup": "ffdhe4096" 00:16:37.916 } 00:16:37.916 } 00:16:37.916 ]' 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.916 17:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.916 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.916 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.916 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.174 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:38.174 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.740 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.999 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.259 00:16:39.259 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.259 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.259 17:27:08 
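The @65-@78 frames that repeat in every iteration come from the connect_authenticate helper. A paraphrase of its shape as the trace shows it (not the verbatim target/auth.sh source; rpc_cmd, hostrpc and bdev_connect are the helpers visible in the expansions, and $hostnqn stands for the uuid-based host NQN above):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # The controller (bidirectional) key is optional: this expansion adds
    # --dhchap-ctrlr-key only when ckeys[keyid] is non-empty; key3 has no
    # ckey, which is why its add_host/attach calls carry no ckey flag.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    bdev_connect -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
}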
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.519 { 00:16:39.519 "cntlid": 27, 00:16:39.519 "qid": 0, 00:16:39.519 "state": "enabled", 00:16:39.519 "thread": "nvmf_tgt_poll_group_000", 00:16:39.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:39.519 "listen_address": { 00:16:39.519 "trtype": "TCP", 00:16:39.519 "adrfam": "IPv4", 00:16:39.519 "traddr": "10.0.0.2", 00:16:39.519 "trsvcid": "4420" 00:16:39.519 }, 00:16:39.519 "peer_address": { 00:16:39.519 "trtype": "TCP", 00:16:39.519 "adrfam": "IPv4", 00:16:39.519 "traddr": "10.0.0.1", 00:16:39.519 "trsvcid": "36474" 00:16:39.519 }, 00:16:39.519 "auth": { 00:16:39.519 "state": "completed", 00:16:39.519 "digest": "sha256", 00:16:39.519 "dhgroup": "ffdhe4096" 00:16:39.519 } 00:16:39.519 } 00:16:39.519 ]' 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.519 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.829 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:39.829 17:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.432 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.432 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.691 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.950 00:16:40.950 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.950 17:27:09 
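Judging from the @31 frames, hostrpc is a thin wrapper that pins every RPC to the host-role SPDK instance through its own UNIX socket, keeping host commands apart from the target's default /var/tmp/spdk.sock; presumably something like:

hostrpc() {
    # /var/tmp/host.sock is the RPC socket the host-side app was started with.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}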
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.950 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.209 { 00:16:41.209 "cntlid": 29, 00:16:41.209 "qid": 0, 00:16:41.209 "state": "enabled", 00:16:41.209 "thread": "nvmf_tgt_poll_group_000", 00:16:41.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:41.209 "listen_address": { 00:16:41.209 "trtype": "TCP", 00:16:41.209 "adrfam": "IPv4", 00:16:41.209 "traddr": "10.0.0.2", 00:16:41.209 "trsvcid": "4420" 00:16:41.209 }, 00:16:41.209 "peer_address": { 00:16:41.209 "trtype": "TCP", 00:16:41.209 "adrfam": "IPv4", 00:16:41.209 "traddr": "10.0.0.1", 00:16:41.209 "trsvcid": "36500" 00:16:41.209 }, 00:16:41.209 "auth": { 00:16:41.209 "state": "completed", 00:16:41.209 "digest": "sha256", 00:16:41.209 "dhgroup": "ffdhe4096" 00:16:41.209 } 00:16:41.209 } 00:16:41.209 ]' 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.209 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.468 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:16:41.468 17:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret 
DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.036 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.295 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.553 00:16:42.553 17:27:11 
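Each successful handshake is verified by the three field checks at the @75-@77 frames against the nvmf_subsystem_get_qpairs dump. Reduced to standalone commands (assuming $qpairs holds the JSON array printed above, with ffdhe4096 as the group under test):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished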
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.553 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.553 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.813 { 00:16:42.813 "cntlid": 31, 00:16:42.813 "qid": 0, 00:16:42.813 "state": "enabled", 00:16:42.813 "thread": "nvmf_tgt_poll_group_000", 00:16:42.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:42.813 "listen_address": { 00:16:42.813 "trtype": "TCP", 00:16:42.813 "adrfam": "IPv4", 00:16:42.813 "traddr": "10.0.0.2", 00:16:42.813 "trsvcid": "4420" 00:16:42.813 }, 00:16:42.813 "peer_address": { 00:16:42.813 "trtype": "TCP", 00:16:42.813 "adrfam": "IPv4", 00:16:42.813 "traddr": "10.0.0.1", 00:16:42.813 "trsvcid": "36528" 00:16:42.813 }, 00:16:42.813 "auth": { 00:16:42.813 "state": "completed", 00:16:42.813 "digest": "sha256", 00:16:42.813 "dhgroup": "ffdhe4096" 00:16:42.813 } 00:16:42.813 } 00:16:42.813 ]' 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.813 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.072 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:16:43.072 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.639 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.898 17:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.157 00:16:44.157 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.157 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.157 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.416 { 00:16:44.416 "cntlid": 33, 00:16:44.416 "qid": 0, 00:16:44.416 "state": "enabled", 00:16:44.416 "thread": "nvmf_tgt_poll_group_000", 00:16:44.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:44.416 "listen_address": { 00:16:44.416 "trtype": "TCP", 00:16:44.416 "adrfam": "IPv4", 00:16:44.416 "traddr": "10.0.0.2", 00:16:44.416 "trsvcid": "4420" 00:16:44.416 }, 00:16:44.416 "peer_address": { 00:16:44.416 "trtype": "TCP", 00:16:44.416 "adrfam": "IPv4", 00:16:44.416 "traddr": "10.0.0.1", 00:16:44.416 "trsvcid": "36548" 00:16:44.416 }, 00:16:44.416 "auth": { 00:16:44.416 "state": "completed", 00:16:44.416 "digest": "sha256", 00:16:44.416 "dhgroup": "ffdhe6144" 00:16:44.416 } 00:16:44.416 } 00:16:44.416 ]' 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.416 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.675 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.675 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.675 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.675 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.675 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.934 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret 
DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:44.934 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.501 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.073 00:16:46.074 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.074 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.074 17:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.074 { 00:16:46.074 "cntlid": 35, 00:16:46.074 "qid": 0, 00:16:46.074 "state": "enabled", 00:16:46.074 "thread": "nvmf_tgt_poll_group_000", 00:16:46.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:46.074 "listen_address": { 00:16:46.074 "trtype": "TCP", 00:16:46.074 "adrfam": "IPv4", 00:16:46.074 "traddr": "10.0.0.2", 00:16:46.074 "trsvcid": "4420" 00:16:46.074 }, 00:16:46.074 "peer_address": { 00:16:46.074 "trtype": "TCP", 00:16:46.074 "adrfam": "IPv4", 00:16:46.074 "traddr": "10.0.0.1", 00:16:46.074 "trsvcid": "54528" 00:16:46.074 }, 00:16:46.074 "auth": { 00:16:46.074 "state": "completed", 00:16:46.074 "digest": "sha256", 00:16:46.074 "dhgroup": "ffdhe6144" 00:16:46.074 } 00:16:46.074 } 00:16:46.074 ]' 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.074 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:46.335 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:46.902 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.162 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.730 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.730 { 00:16:47.730 "cntlid": 37, 00:16:47.730 "qid": 0, 00:16:47.730 "state": "enabled", 00:16:47.730 "thread": "nvmf_tgt_poll_group_000", 00:16:47.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:47.730 "listen_address": { 00:16:47.730 "trtype": "TCP", 00:16:47.730 "adrfam": "IPv4", 00:16:47.730 "traddr": "10.0.0.2", 00:16:47.730 "trsvcid": "4420" 00:16:47.730 }, 00:16:47.730 "peer_address": { 00:16:47.730 "trtype": "TCP", 00:16:47.730 "adrfam": "IPv4", 00:16:47.730 "traddr": "10.0.0.1", 00:16:47.730 "trsvcid": "54558" 00:16:47.730 }, 00:16:47.730 "auth": { 00:16:47.730 "state": "completed", 00:16:47.730 "digest": "sha256", 00:16:47.730 "dhgroup": "ffdhe6144" 00:16:47.730 } 00:16:47.730 } 00:16:47.730 ]' 00:16:47.730 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.989 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.989 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.989 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.989 17:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.989 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.989 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:47.989 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.248 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:16:48.248 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.816 17:27:17 
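The --dhchap-secret/--dhchap-ctrl-secret strings follow the NVMe DHHC-1 secret representation: the literal "DHHC-1:", a two-digit transform id (00 for an untransformed secret; 01, 02, 03 for SHA-256/-384/-512), the base64 of the secret plus a CRC, and a trailing ":". nvme-cli can mint keys of this shape; a hypothetical invocation (flag spellings differ between nvme-cli versions, so treat this as a sketch):

# Emit a DHHC-1:01: (SHA-256-transformed, 32-byte) secret bound to this host NQN.
nvme gen-dhchap-key --hmac=1 --key-length=32 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562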
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.816 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.384 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.384 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.384 { 00:16:49.384 "cntlid": 39, 00:16:49.384 "qid": 0, 00:16:49.384 "state": "enabled", 00:16:49.384 "thread": "nvmf_tgt_poll_group_000", 00:16:49.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:49.384 "listen_address": { 00:16:49.384 "trtype": "TCP", 00:16:49.384 "adrfam": "IPv4", 00:16:49.384 "traddr": "10.0.0.2", 00:16:49.384 "trsvcid": "4420" 00:16:49.384 }, 00:16:49.384 "peer_address": { 00:16:49.384 "trtype": "TCP", 00:16:49.385 "adrfam": "IPv4", 00:16:49.385 "traddr": "10.0.0.1", 00:16:49.385 "trsvcid": "54588" 00:16:49.385 }, 00:16:49.385 "auth": { 00:16:49.385 "state": "completed", 00:16:49.385 "digest": "sha256", 00:16:49.385 "dhgroup": "ffdhe6144" 00:16:49.385 } 00:16:49.385 } 00:16:49.385 ]' 00:16:49.385 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.643 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.903 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:16:49.903 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.471 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
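The @60 frames show bdev_connect expanding straight into bdev_nvme_attach_controller with the fabric tuple fixed; presumably it is little more than the following (host and subsystem NQNs hardcoded here from this run for illustration):

bdev_connect() {
    # "$@" carries -b <name> plus the --dhchap-key/--dhchap-ctrlr-key flags.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 "$@"
}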
00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.730 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.989 00:16:50.989 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.989 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.989 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.248 { 00:16:51.248 "cntlid": 41, 00:16:51.248 "qid": 0, 00:16:51.248 "state": "enabled", 00:16:51.248 "thread": "nvmf_tgt_poll_group_000", 00:16:51.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:51.248 "listen_address": { 00:16:51.248 "trtype": "TCP", 00:16:51.248 "adrfam": "IPv4", 00:16:51.248 "traddr": "10.0.0.2", 00:16:51.248 "trsvcid": "4420" 00:16:51.248 }, 00:16:51.248 "peer_address": { 00:16:51.248 "trtype": "TCP", 00:16:51.248 "adrfam": "IPv4", 00:16:51.248 "traddr": "10.0.0.1", 00:16:51.248 "trsvcid": "54612" 00:16:51.248 }, 00:16:51.248 "auth": { 00:16:51.248 "state": "completed", 00:16:51.248 "digest": "sha256", 00:16:51.248 "dhgroup": "ffdhe8192" 00:16:51.248 } 00:16:51.248 } 00:16:51.248 ]' 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.248 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.507 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.507 17:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.507 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.507 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.507 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.766 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:51.766 17:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.334 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.593 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.852 00:16:52.852 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.852 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.852 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.111 { 00:16:53.111 "cntlid": 43, 00:16:53.111 "qid": 0, 00:16:53.111 "state": "enabled", 00:16:53.111 "thread": "nvmf_tgt_poll_group_000", 00:16:53.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:53.111 "listen_address": { 00:16:53.111 "trtype": "TCP", 00:16:53.111 "adrfam": "IPv4", 00:16:53.111 "traddr": "10.0.0.2", 00:16:53.111 "trsvcid": "4420" 00:16:53.111 }, 00:16:53.111 "peer_address": { 00:16:53.111 "trtype": "TCP", 00:16:53.111 "adrfam": "IPv4", 00:16:53.111 "traddr": "10.0.0.1", 00:16:53.111 "trsvcid": "54642" 00:16:53.111 }, 00:16:53.111 "auth": { 00:16:53.111 "state": "completed", 00:16:53.111 "digest": "sha256", 00:16:53.111 "dhgroup": "ffdhe8192" 00:16:53.111 } 00:16:53.111 } 00:16:53.111 ]' 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:53.111 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.369 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.369 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.369 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.369 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.369 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.627 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:53.627 17:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.195 17:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.195 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.762 00:16:54.762 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.762 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.762 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.021 { 00:16:55.021 "cntlid": 45, 00:16:55.021 "qid": 0, 00:16:55.021 "state": "enabled", 00:16:55.021 "thread": "nvmf_tgt_poll_group_000", 00:16:55.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:55.021 "listen_address": { 00:16:55.021 "trtype": "TCP", 00:16:55.021 "adrfam": "IPv4", 00:16:55.021 "traddr": "10.0.0.2", 00:16:55.021 "trsvcid": "4420" 00:16:55.021 }, 00:16:55.021 "peer_address": { 00:16:55.021 "trtype": "TCP", 00:16:55.021 "adrfam": "IPv4", 00:16:55.021 "traddr": "10.0.0.1", 00:16:55.021 "trsvcid": "54682" 00:16:55.021 }, 00:16:55.021 "auth": { 00:16:55.021 "state": "completed", 00:16:55.021 "digest": "sha256", 00:16:55.021 "dhgroup": "ffdhe8192" 00:16:55.021 } 00:16:55.021 } 00:16:55.021 ]' 00:16:55.021 
17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.021 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.280 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:16:55.280 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.848 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.107 17:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.107 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.674 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.674 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.674 { 00:16:56.674 "cntlid": 47, 00:16:56.674 "qid": 0, 00:16:56.674 "state": "enabled", 00:16:56.674 "thread": "nvmf_tgt_poll_group_000", 00:16:56.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:56.674 "listen_address": { 00:16:56.674 "trtype": "TCP", 00:16:56.674 "adrfam": "IPv4", 00:16:56.674 "traddr": "10.0.0.2", 00:16:56.674 "trsvcid": "4420" 00:16:56.674 }, 00:16:56.674 "peer_address": { 00:16:56.674 "trtype": "TCP", 00:16:56.674 "adrfam": "IPv4", 00:16:56.674 "traddr": "10.0.0.1", 00:16:56.675 "trsvcid": "50312" 00:16:56.675 }, 00:16:56.675 "auth": { 00:16:56.675 "state": "completed", 00:16:56.675 
"digest": "sha256", 00:16:56.675 "dhgroup": "ffdhe8192" 00:16:56.675 } 00:16:56.675 } 00:16:56.675 ]' 00:16:56.675 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.933 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.933 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.933 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.933 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.933 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.933 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.934 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.192 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:16:57.192 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:57.760 17:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.760 17:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.019 00:16:58.019 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.019 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.019 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.278 { 00:16:58.278 "cntlid": 49, 00:16:58.278 "qid": 0, 00:16:58.278 "state": "enabled", 00:16:58.278 "thread": "nvmf_tgt_poll_group_000", 00:16:58.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:58.278 "listen_address": { 00:16:58.278 "trtype": "TCP", 00:16:58.278 "adrfam": "IPv4", 
00:16:58.278 "traddr": "10.0.0.2", 00:16:58.278 "trsvcid": "4420" 00:16:58.278 }, 00:16:58.278 "peer_address": { 00:16:58.278 "trtype": "TCP", 00:16:58.278 "adrfam": "IPv4", 00:16:58.278 "traddr": "10.0.0.1", 00:16:58.278 "trsvcid": "50330" 00:16:58.278 }, 00:16:58.278 "auth": { 00:16:58.278 "state": "completed", 00:16:58.278 "digest": "sha384", 00:16:58.278 "dhgroup": "null" 00:16:58.278 } 00:16:58.278 } 00:16:58.278 ]' 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.278 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:58.537 17:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:16:59.104 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.363 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.622 00:16:59.622 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.622 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.622 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.880 { 00:16:59.880 "cntlid": 51, 00:16:59.880 "qid": 0, 00:16:59.880 "state": "enabled", 
00:16:59.880 "thread": "nvmf_tgt_poll_group_000", 00:16:59.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:59.880 "listen_address": { 00:16:59.880 "trtype": "TCP", 00:16:59.880 "adrfam": "IPv4", 00:16:59.880 "traddr": "10.0.0.2", 00:16:59.880 "trsvcid": "4420" 00:16:59.880 }, 00:16:59.880 "peer_address": { 00:16:59.880 "trtype": "TCP", 00:16:59.880 "adrfam": "IPv4", 00:16:59.880 "traddr": "10.0.0.1", 00:16:59.880 "trsvcid": "50342" 00:16:59.880 }, 00:16:59.880 "auth": { 00:16:59.880 "state": "completed", 00:16:59.880 "digest": "sha384", 00:16:59.880 "dhgroup": "null" 00:16:59.880 } 00:16:59.880 } 00:16:59.880 ]' 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.880 17:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.880 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.880 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.139 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.139 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.139 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.139 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:00.139 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:00.707 17:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.965 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.224 00:17:01.224 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.224 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.224 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.482 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.482 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.482 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.482 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.482 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.482 17:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.482 { 00:17:01.482 "cntlid": 53, 00:17:01.482 "qid": 0, 00:17:01.482 "state": "enabled", 00:17:01.482 "thread": "nvmf_tgt_poll_group_000", 00:17:01.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:01.482 "listen_address": { 00:17:01.482 "trtype": "TCP", 00:17:01.482 "adrfam": "IPv4", 00:17:01.482 "traddr": "10.0.0.2", 00:17:01.482 "trsvcid": "4420" 00:17:01.482 }, 00:17:01.482 "peer_address": { 00:17:01.482 "trtype": "TCP", 00:17:01.482 "adrfam": "IPv4", 00:17:01.482 "traddr": "10.0.0.1", 00:17:01.482 "trsvcid": "50354" 00:17:01.482 }, 00:17:01.482 "auth": { 00:17:01.482 "state": "completed", 00:17:01.482 "digest": "sha384", 00:17:01.482 "dhgroup": "null" 00:17:01.482 } 00:17:01.482 } 00:17:01.482 ]' 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.483 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.741 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:01.741 17:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.308 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.567 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.826 00:17:02.826 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.826 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.826 17:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.085 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.085 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.085 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.085 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.086 { 00:17:03.086 "cntlid": 55, 00:17:03.086 "qid": 0, 00:17:03.086 "state": "enabled", 00:17:03.086 "thread": "nvmf_tgt_poll_group_000", 00:17:03.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:03.086 "listen_address": { 00:17:03.086 "trtype": "TCP", 00:17:03.086 "adrfam": "IPv4", 00:17:03.086 "traddr": "10.0.0.2", 00:17:03.086 "trsvcid": "4420" 00:17:03.086 }, 00:17:03.086 "peer_address": { 00:17:03.086 "trtype": "TCP", 00:17:03.086 "adrfam": "IPv4", 00:17:03.086 "traddr": "10.0.0.1", 00:17:03.086 "trsvcid": "50386" 00:17:03.086 }, 00:17:03.086 "auth": { 00:17:03.086 "state": "completed", 00:17:03.086 "digest": "sha384", 00:17:03.086 "dhgroup": "null" 00:17:03.086 } 00:17:03.086 } 00:17:03.086 ]' 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.086 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.345 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:03.345 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:03.912 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.912 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:03.912 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.913 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.913 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.913 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.913 17:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.913 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.913 17:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.171 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:04.171 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.171 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.171 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.171 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.171 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.172 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.430 00:17:04.430 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.430 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.430 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.689 { 00:17:04.689 "cntlid": 57, 00:17:04.689 "qid": 0, 00:17:04.689 "state": "enabled", 00:17:04.689 "thread": "nvmf_tgt_poll_group_000", 00:17:04.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:04.689 "listen_address": { 00:17:04.689 "trtype": "TCP", 00:17:04.689 "adrfam": "IPv4", 00:17:04.689 "traddr": "10.0.0.2", 00:17:04.689 "trsvcid": "4420" 00:17:04.689 }, 00:17:04.689 "peer_address": { 00:17:04.689 "trtype": "TCP", 00:17:04.689 "adrfam": "IPv4", 00:17:04.689 "traddr": "10.0.0.1", 00:17:04.689 "trsvcid": "50418" 00:17:04.689 }, 00:17:04.689 "auth": { 00:17:04.689 "state": "completed", 00:17:04.689 "digest": "sha384", 00:17:04.689 "dhgroup": "ffdhe2048" 00:17:04.689 } 00:17:04.689 } 00:17:04.689 ]' 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.689 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.948 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:04.948 17:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.516 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.775 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.034 00:17:06.034 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.034 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.034 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.292 { 00:17:06.292 "cntlid": 59, 00:17:06.292 "qid": 0, 00:17:06.292 "state": "enabled", 00:17:06.292 "thread": "nvmf_tgt_poll_group_000", 00:17:06.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:06.292 "listen_address": { 00:17:06.292 "trtype": "TCP", 00:17:06.292 "adrfam": "IPv4", 00:17:06.292 "traddr": "10.0.0.2", 00:17:06.292 "trsvcid": "4420" 00:17:06.292 }, 00:17:06.292 "peer_address": { 00:17:06.292 "trtype": "TCP", 00:17:06.292 "adrfam": "IPv4", 00:17:06.292 "traddr": "10.0.0.1", 00:17:06.292 "trsvcid": "35252" 00:17:06.292 }, 00:17:06.292 "auth": { 00:17:06.292 "state": "completed", 00:17:06.292 "digest": "sha384", 00:17:06.292 "dhgroup": "ffdhe2048" 00:17:06.292 } 00:17:06.292 } 00:17:06.292 ]' 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.292 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.550 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:06.550 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.118 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.377 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.636 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.636 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.895 { 00:17:07.895 "cntlid": 61, 00:17:07.895 "qid": 0, 00:17:07.895 "state": "enabled", 00:17:07.895 "thread": "nvmf_tgt_poll_group_000", 00:17:07.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:07.895 "listen_address": { 00:17:07.895 "trtype": "TCP", 00:17:07.895 "adrfam": "IPv4", 00:17:07.895 "traddr": "10.0.0.2", 00:17:07.895 "trsvcid": "4420" 00:17:07.895 }, 00:17:07.895 "peer_address": { 00:17:07.895 "trtype": "TCP", 00:17:07.895 "adrfam": "IPv4", 00:17:07.895 "traddr": "10.0.0.1", 00:17:07.895 "trsvcid": "35286" 00:17:07.895 }, 00:17:07.895 "auth": { 00:17:07.895 "state": "completed", 00:17:07.895 "digest": "sha384", 00:17:07.895 "dhgroup": "ffdhe2048" 00:17:07.895 } 00:17:07.895 } 00:17:07.895 ]' 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.895 17:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.154 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:08.154 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.719 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.977 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.978 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.978 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.978 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.237 00:17:09.237 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.237 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.237 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.496 { 00:17:09.496 "cntlid": 63, 00:17:09.496 "qid": 0, 00:17:09.496 "state": "enabled", 00:17:09.496 "thread": "nvmf_tgt_poll_group_000", 00:17:09.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:09.496 "listen_address": { 00:17:09.496 "trtype": "TCP", 00:17:09.496 "adrfam": "IPv4", 00:17:09.496 "traddr": "10.0.0.2", 00:17:09.496 "trsvcid": "4420" 00:17:09.496 }, 00:17:09.496 "peer_address": { 00:17:09.496 "trtype": "TCP", 00:17:09.496 "adrfam": "IPv4", 00:17:09.496 "traddr": "10.0.0.1", 00:17:09.496 "trsvcid": "35314" 00:17:09.496 }, 00:17:09.496 "auth": { 00:17:09.496 "state": "completed", 00:17:09.496 "digest": "sha384", 00:17:09.496 "dhgroup": "ffdhe2048" 00:17:09.496 } 00:17:09.496 } 00:17:09.496 ]' 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.496 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.754 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:09.754 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:10.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.322 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.581 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.839 
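The ffdhe3072 pass that begins above repeats the per-key sequence already traced for ffdhe2048: pin the host initiator to a single digest/dhgroup pair, authorize the host NQN on the subsystem with a DH-HMAC-CHAP key (plus an optional controller key for bidirectional authentication), then attach a controller with the matching keys. Condensed into a standalone sketch, with the socket, addresses, NQNs, and flags exactly as they appear in this run — the key0/ckey0 names are assumed to be already loaded in the target's keyring by setup earlier in the log, and the plain rpc.py call for the target side is an assumption, since rpc_cmd's expansion is hidden behind xtrace_disable in the trace:

  # Host-side RPC wrapper, as the trace's hostrpc expands to; the target side uses rpc.py without -s here (assumed).
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

  # 1. Restrict the initiator to one digest/dhgroup so the negotiation outcome is deterministic.
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2. Authorize the host on the subsystem; --dhchap-ctrlr-key enables bidirectional authentication.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a controller from the host with the matching key pair; DH-HMAC-CHAP runs during CONNECT.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0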
00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.839 17:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.839 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.839 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.839 { 00:17:10.839 "cntlid": 65, 00:17:10.839 "qid": 0, 00:17:10.839 "state": "enabled", 00:17:10.839 "thread": "nvmf_tgt_poll_group_000", 00:17:10.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:10.839 "listen_address": { 00:17:10.839 "trtype": "TCP", 00:17:10.839 "adrfam": "IPv4", 00:17:10.839 "traddr": "10.0.0.2", 00:17:10.839 "trsvcid": "4420" 00:17:10.839 }, 00:17:10.839 "peer_address": { 00:17:10.839 "trtype": "TCP", 00:17:10.839 "adrfam": "IPv4", 00:17:10.839 "traddr": "10.0.0.1", 00:17:10.839 "trsvcid": "35332" 00:17:10.839 }, 00:17:10.839 "auth": { 00:17:10.839 "state": "completed", 00:17:10.839 "digest": "sha384", 00:17:10.839 "dhgroup": "ffdhe3072" 00:17:10.839 } 00:17:10.839 } 00:17:10.839 ]' 00:17:10.839 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.098 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.357 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:11.357 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:11.924 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.924 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:11.924 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.925 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.925 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.925 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.925 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.925 17:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.183 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.442 00:17:12.442 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.443 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.443 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.443 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.443 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.443 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.443 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.701 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.701 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.701 { 00:17:12.701 "cntlid": 67, 00:17:12.701 "qid": 0, 00:17:12.701 "state": "enabled", 00:17:12.701 "thread": "nvmf_tgt_poll_group_000", 00:17:12.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:12.701 "listen_address": { 00:17:12.701 "trtype": "TCP", 00:17:12.701 "adrfam": "IPv4", 00:17:12.701 "traddr": "10.0.0.2", 00:17:12.701 "trsvcid": "4420" 00:17:12.701 }, 00:17:12.701 "peer_address": { 00:17:12.701 "trtype": "TCP", 00:17:12.701 "adrfam": "IPv4", 00:17:12.701 "traddr": "10.0.0.1", 00:17:12.701 "trsvcid": "35374" 00:17:12.701 }, 00:17:12.701 "auth": { 00:17:12.701 "state": "completed", 00:17:12.701 "digest": "sha384", 00:17:12.701 "dhgroup": "ffdhe3072" 00:17:12.701 } 00:17:12.701 } 00:17:12.701 ]' 00:17:12.701 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.701 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.701 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.702 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.702 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.702 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.702 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.702 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.960 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret 
DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:12.961 17:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.528 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.787 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.788 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.046 00:17:14.046 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.046 17:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.046 { 00:17:14.046 "cntlid": 69, 00:17:14.046 "qid": 0, 00:17:14.046 "state": "enabled", 00:17:14.046 "thread": "nvmf_tgt_poll_group_000", 00:17:14.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:14.046 "listen_address": { 00:17:14.046 "trtype": "TCP", 00:17:14.046 "adrfam": "IPv4", 00:17:14.046 "traddr": "10.0.0.2", 00:17:14.046 "trsvcid": "4420" 00:17:14.046 }, 00:17:14.046 "peer_address": { 00:17:14.046 "trtype": "TCP", 00:17:14.046 "adrfam": "IPv4", 00:17:14.046 "traddr": "10.0.0.1", 00:17:14.046 "trsvcid": "35412" 00:17:14.046 }, 00:17:14.046 "auth": { 00:17:14.046 "state": "completed", 00:17:14.046 "digest": "sha384", 00:17:14.046 "dhgroup": "ffdhe3072" 00:17:14.046 } 00:17:14.046 } 00:17:14.046 ]' 00:17:14.046 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.305 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:14.566 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:14.566 17:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.134 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
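The key3 iteration above is the one place where nvmf_subsystem_add_host and bdev_connect carry no --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace yields an empty array when the matching ckeys entry is empty, so authentication for that key is unidirectional (host key only). A minimal standalone illustration of that parameter-expansion idiom, with made-up placeholder values for the demo:

  #!/usr/bin/env bash
  ckeys=(ckey0 ckey1 ckey2 "")          # hypothetical: no controller key at index 3
  for keyid in "${!ckeys[@]}"; do
      # ${var:+word} expands to word only if var is set and non-empty, so ckey
      # becomes (--dhchap-ctrlr-key ckeyN) or stays empty and drops out entirely.
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid -> ${ckey[*]:-unidirectional (host key only)}"
  done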
00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.393 00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.393 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.652 { 00:17:15.652 "cntlid": 71, 00:17:15.652 "qid": 0, 00:17:15.652 "state": "enabled", 00:17:15.652 "thread": "nvmf_tgt_poll_group_000", 00:17:15.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:15.652 "listen_address": { 00:17:15.652 "trtype": "TCP", 00:17:15.652 "adrfam": "IPv4", 00:17:15.652 "traddr": "10.0.0.2", 00:17:15.652 "trsvcid": "4420" 00:17:15.652 }, 00:17:15.652 "peer_address": { 00:17:15.652 "trtype": "TCP", 00:17:15.652 "adrfam": "IPv4", 00:17:15.652 "traddr": "10.0.0.1", 00:17:15.652 "trsvcid": "56422" 00:17:15.652 }, 00:17:15.652 "auth": { 00:17:15.652 "state": "completed", 00:17:15.652 "digest": "sha384", 00:17:15.652 "dhgroup": "ffdhe3072" 00:17:15.652 } 00:17:15.652 } 00:17:15.652 ]' 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.652 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.911 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.911 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.911 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.911 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.911 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.911 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:16.169 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
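Every iteration, including the ffdhe4096 pass that continues below, is judged by the same probe: dump the subsystem's queue pairs and compare the three auth fields against the expected digest, the expected dhgroup, and the literal "completed". Stripped of the xtrace noise, the check amounts to the sketch below (rpc.py path, jq filters, and subsystem NQN as used throughout this run; the target-side socket argument is left implicit because rpc_cmd's expansion is not shown in the trace):

  qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished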
00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.740 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.045 00:17:17.045 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.045 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.045 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.331 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.331 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.332 { 00:17:17.332 "cntlid": 73, 00:17:17.332 "qid": 0, 00:17:17.332 "state": "enabled", 00:17:17.332 "thread": "nvmf_tgt_poll_group_000", 00:17:17.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:17.332 "listen_address": { 00:17:17.332 "trtype": "TCP", 00:17:17.332 "adrfam": "IPv4", 00:17:17.332 "traddr": "10.0.0.2", 00:17:17.332 "trsvcid": "4420" 00:17:17.332 }, 00:17:17.332 "peer_address": { 00:17:17.332 "trtype": "TCP", 00:17:17.332 "adrfam": "IPv4", 00:17:17.332 "traddr": "10.0.0.1", 00:17:17.332 "trsvcid": "56450" 00:17:17.332 }, 00:17:17.332 "auth": { 00:17:17.332 "state": "completed", 00:17:17.332 "digest": "sha384", 00:17:17.332 "dhgroup": "ffdhe4096" 00:17:17.332 } 00:17:17.332 } 00:17:17.332 ]' 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.332 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.595 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.595 
17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.595 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.595 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:17.595 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.163 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.422 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.682 00:17:18.682 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.682 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.682 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.940 { 00:17:18.940 "cntlid": 75, 00:17:18.940 "qid": 0, 00:17:18.940 "state": "enabled", 00:17:18.940 "thread": "nvmf_tgt_poll_group_000", 00:17:18.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:18.940 "listen_address": { 00:17:18.940 "trtype": "TCP", 00:17:18.940 "adrfam": "IPv4", 00:17:18.940 "traddr": "10.0.0.2", 00:17:18.940 "trsvcid": "4420" 00:17:18.940 }, 00:17:18.940 "peer_address": { 00:17:18.940 "trtype": "TCP", 00:17:18.940 "adrfam": "IPv4", 00:17:18.940 "traddr": "10.0.0.1", 00:17:18.940 "trsvcid": "56482" 00:17:18.940 }, 00:17:18.940 "auth": { 00:17:18.940 "state": "completed", 00:17:18.940 "digest": "sha384", 00:17:18.940 "dhgroup": "ffdhe4096" 00:17:18.940 } 00:17:18.940 } 00:17:18.940 ]' 00:17:18.940 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.940 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.940 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.940 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:18.940 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.940 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.941 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.941 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.199 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:19.199 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.767 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.026 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.285 00:17:20.285 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.285 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.285 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.543 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.543 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.544 { 00:17:20.544 "cntlid": 77, 00:17:20.544 "qid": 0, 00:17:20.544 "state": "enabled", 00:17:20.544 "thread": "nvmf_tgt_poll_group_000", 00:17:20.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:20.544 "listen_address": { 00:17:20.544 "trtype": "TCP", 00:17:20.544 "adrfam": "IPv4", 00:17:20.544 "traddr": "10.0.0.2", 00:17:20.544 "trsvcid": "4420" 00:17:20.544 }, 00:17:20.544 "peer_address": { 00:17:20.544 "trtype": "TCP", 00:17:20.544 "adrfam": "IPv4", 00:17:20.544 "traddr": "10.0.0.1", 00:17:20.544 "trsvcid": "56518" 00:17:20.544 }, 00:17:20.544 "auth": { 00:17:20.544 "state": "completed", 00:17:20.544 "digest": "sha384", 00:17:20.544 "dhgroup": "ffdhe4096" 00:17:20.544 } 00:17:20.544 } 00:17:20.544 ]' 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.544 17:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.544 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.802 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.802 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.802 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.802 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:20.802 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.369 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.628 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.887 00:17:21.887 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.887 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.887 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.146 { 00:17:22.146 "cntlid": 79, 00:17:22.146 "qid": 0, 00:17:22.146 "state": "enabled", 00:17:22.146 "thread": "nvmf_tgt_poll_group_000", 00:17:22.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:22.146 "listen_address": { 00:17:22.146 "trtype": "TCP", 00:17:22.146 "adrfam": "IPv4", 00:17:22.146 "traddr": "10.0.0.2", 00:17:22.146 "trsvcid": "4420" 00:17:22.146 }, 00:17:22.146 "peer_address": { 00:17:22.146 "trtype": "TCP", 00:17:22.146 "adrfam": "IPv4", 00:17:22.146 "traddr": "10.0.0.1", 00:17:22.146 "trsvcid": "56552" 00:17:22.146 }, 00:17:22.146 "auth": { 00:17:22.146 "state": "completed", 00:17:22.146 "digest": "sha384", 00:17:22.146 "dhgroup": "ffdhe4096" 00:17:22.146 } 00:17:22.146 } 00:17:22.146 ]' 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.146 17:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.146 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.405 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.405 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.405 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.405 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:22.405 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.972 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:23.231 17:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.231 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.489 00:17:23.489 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.489 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.489 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.748 { 00:17:23.748 "cntlid": 81, 00:17:23.748 "qid": 0, 00:17:23.748 "state": "enabled", 00:17:23.748 "thread": "nvmf_tgt_poll_group_000", 00:17:23.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:23.748 "listen_address": { 00:17:23.748 "trtype": "TCP", 00:17:23.748 "adrfam": "IPv4", 00:17:23.748 "traddr": "10.0.0.2", 00:17:23.748 "trsvcid": "4420" 00:17:23.748 }, 00:17:23.748 "peer_address": { 00:17:23.748 "trtype": "TCP", 00:17:23.748 "adrfam": "IPv4", 00:17:23.748 "traddr": "10.0.0.1", 00:17:23.748 "trsvcid": "56580" 00:17:23.748 }, 00:17:23.748 "auth": { 00:17:23.748 "state": "completed", 00:17:23.748 "digest": 
"sha384", 00:17:23.748 "dhgroup": "ffdhe6144" 00:17:23.748 } 00:17:23.748 } 00:17:23.748 ]' 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.748 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.007 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.007 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.007 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.007 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.007 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.265 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:24.265 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.832 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.833 17:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.400 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.400 { 00:17:25.400 "cntlid": 83, 00:17:25.400 "qid": 0, 00:17:25.400 "state": "enabled", 00:17:25.400 "thread": "nvmf_tgt_poll_group_000", 00:17:25.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:25.400 "listen_address": { 00:17:25.400 "trtype": "TCP", 00:17:25.400 "adrfam": "IPv4", 00:17:25.400 "traddr": "10.0.0.2", 00:17:25.400 
"trsvcid": "4420" 00:17:25.400 }, 00:17:25.400 "peer_address": { 00:17:25.400 "trtype": "TCP", 00:17:25.400 "adrfam": "IPv4", 00:17:25.400 "traddr": "10.0.0.1", 00:17:25.400 "trsvcid": "53940" 00:17:25.400 }, 00:17:25.400 "auth": { 00:17:25.400 "state": "completed", 00:17:25.400 "digest": "sha384", 00:17:25.400 "dhgroup": "ffdhe6144" 00:17:25.400 } 00:17:25.400 } 00:17:25.400 ]' 00:17:25.400 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.659 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.917 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:25.917 17:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.483 
17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.483 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.742 17:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.001 00:17:27.001 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.001 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.001 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.259 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.259 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.259 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.259 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.259 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.259 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.259 { 00:17:27.259 "cntlid": 85, 00:17:27.259 "qid": 0, 00:17:27.259 "state": "enabled", 00:17:27.259 "thread": "nvmf_tgt_poll_group_000", 00:17:27.259 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:27.259 "listen_address": { 00:17:27.259 "trtype": "TCP", 00:17:27.259 "adrfam": "IPv4", 00:17:27.259 "traddr": "10.0.0.2", 00:17:27.259 "trsvcid": "4420" 00:17:27.259 }, 00:17:27.259 "peer_address": { 00:17:27.259 "trtype": "TCP", 00:17:27.259 "adrfam": "IPv4", 00:17:27.259 "traddr": "10.0.0.1", 00:17:27.259 "trsvcid": "53974" 00:17:27.259 }, 00:17:27.259 "auth": { 00:17:27.259 "state": "completed", 00:17:27.259 "digest": "sha384", 00:17:27.259 "dhgroup": "ffdhe6144" 00:17:27.259 } 00:17:27.259 } 00:17:27.259 ]' 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.260 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.518 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:27.518 17:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:28.085 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.085 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:28.085 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.085 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.085 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.086 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.086 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.086 17:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.344 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.603 00:17:28.603 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.603 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.603 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.862 { 00:17:28.862 "cntlid": 87, 
00:17:28.862 "qid": 0, 00:17:28.862 "state": "enabled", 00:17:28.862 "thread": "nvmf_tgt_poll_group_000", 00:17:28.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:28.862 "listen_address": { 00:17:28.862 "trtype": "TCP", 00:17:28.862 "adrfam": "IPv4", 00:17:28.862 "traddr": "10.0.0.2", 00:17:28.862 "trsvcid": "4420" 00:17:28.862 }, 00:17:28.862 "peer_address": { 00:17:28.862 "trtype": "TCP", 00:17:28.862 "adrfam": "IPv4", 00:17:28.862 "traddr": "10.0.0.1", 00:17:28.862 "trsvcid": "53998" 00:17:28.862 }, 00:17:28.862 "auth": { 00:17:28.862 "state": "completed", 00:17:28.862 "digest": "sha384", 00:17:28.862 "dhgroup": "ffdhe6144" 00:17:28.862 } 00:17:28.862 } 00:17:28.862 ]' 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.862 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.862 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.862 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.862 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.121 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:29.121 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.688 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.947 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.948 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.948 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.515 00:17:30.515 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.515 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.515 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.773 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.773 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.773 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.773 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.773 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.773 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.774 { 00:17:30.774 "cntlid": 89, 00:17:30.774 "qid": 0, 00:17:30.774 "state": "enabled", 00:17:30.774 "thread": "nvmf_tgt_poll_group_000", 00:17:30.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:30.774 "listen_address": { 00:17:30.774 "trtype": "TCP", 00:17:30.774 "adrfam": "IPv4", 00:17:30.774 "traddr": "10.0.0.2", 00:17:30.774 "trsvcid": "4420" 00:17:30.774 }, 00:17:30.774 "peer_address": { 00:17:30.774 "trtype": "TCP", 00:17:30.774 "adrfam": "IPv4", 00:17:30.774 "traddr": "10.0.0.1", 00:17:30.774 "trsvcid": "54030" 00:17:30.774 }, 00:17:30.774 "auth": { 00:17:30.774 "state": "completed", 00:17:30.774 "digest": "sha384", 00:17:30.774 "dhgroup": "ffdhe8192" 00:17:30.774 } 00:17:30.774 } 00:17:30.774 ]' 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.774 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.032 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:31.032 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.599 17:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.599 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.858 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.426 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.426 { 00:17:32.426 "cntlid": 91, 00:17:32.426 "qid": 0, 00:17:32.426 "state": "enabled", 00:17:32.426 "thread": "nvmf_tgt_poll_group_000", 00:17:32.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:32.426 "listen_address": { 00:17:32.426 "trtype": "TCP", 00:17:32.426 "adrfam": "IPv4", 00:17:32.426 "traddr": "10.0.0.2", 00:17:32.426 "trsvcid": "4420" 00:17:32.426 }, 00:17:32.426 "peer_address": { 00:17:32.426 "trtype": "TCP", 00:17:32.426 "adrfam": "IPv4", 00:17:32.426 "traddr": "10.0.0.1", 00:17:32.426 "trsvcid": "54056" 00:17:32.426 }, 00:17:32.426 "auth": { 00:17:32.426 "state": "completed", 00:17:32.426 "digest": "sha384", 00:17:32.426 "dhgroup": "ffdhe8192" 00:17:32.426 } 00:17:32.426 } 00:17:32.426 ]' 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.426 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.685 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.685 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.685 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.685 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.685 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.943 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:32.943 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:33.510 17:28:02 
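Besides the SPDK-to-SPDK attach, every iteration replays the handshake through the kernel initiator with nvme-cli, passing the secrets inline. The DHHC-1:XX:...: strings are the standard NVMe DH-HMAC-CHAP secret representation; the two-digit field records the hash used to transform the stored secret (00 for an untransformed secret, 01/02/03 for SHA-256/384/512), which is why key0 pairs a DHHC-1:00 host secret with a DHHC-1:03 controller secret. As an illustrative aside (not part of this trace; flags as I understand current nvme-cli, secrets elided), keys of this form can be minted with gen-dhchap-key:

  # Illustrative only: generate a SHA-512-transformed secret bound to the host NQN.
  nvme gen-dhchap-key --hmac=3 --key-length=48 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

  # Host-side connect/disconnect as exercised by nvme_connect in the trace.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0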
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.510 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.511 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.511 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.078 00:17:34.078 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.078 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.078 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.337 17:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.337 { 00:17:34.337 "cntlid": 93, 00:17:34.337 "qid": 0, 00:17:34.337 "state": "enabled", 00:17:34.337 "thread": "nvmf_tgt_poll_group_000", 00:17:34.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:34.337 "listen_address": { 00:17:34.337 "trtype": "TCP", 00:17:34.337 "adrfam": "IPv4", 00:17:34.337 "traddr": "10.0.0.2", 00:17:34.337 "trsvcid": "4420" 00:17:34.337 }, 00:17:34.337 "peer_address": { 00:17:34.337 "trtype": "TCP", 00:17:34.337 "adrfam": "IPv4", 00:17:34.337 "traddr": "10.0.0.1", 00:17:34.337 "trsvcid": "54078" 00:17:34.337 }, 00:17:34.337 "auth": { 00:17:34.337 "state": "completed", 00:17:34.337 "digest": "sha384", 00:17:34.337 "dhgroup": "ffdhe8192" 00:17:34.337 } 00:17:34.337 } 00:17:34.337 ]' 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.337 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.596 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:34.596 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.163 17:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.163 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.422 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.989 00:17:35.989 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.989 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.989 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.247 { 00:17:36.247 "cntlid": 95, 00:17:36.247 "qid": 0, 00:17:36.247 "state": "enabled", 00:17:36.247 "thread": "nvmf_tgt_poll_group_000", 00:17:36.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:36.247 "listen_address": { 00:17:36.247 "trtype": "TCP", 00:17:36.247 "adrfam": "IPv4", 00:17:36.247 "traddr": "10.0.0.2", 00:17:36.247 "trsvcid": "4420" 00:17:36.247 }, 00:17:36.247 "peer_address": { 00:17:36.247 "trtype": "TCP", 00:17:36.247 "adrfam": "IPv4", 00:17:36.247 "traddr": "10.0.0.1", 00:17:36.247 "trsvcid": "59814" 00:17:36.247 }, 00:17:36.247 "auth": { 00:17:36.247 "state": "completed", 00:17:36.247 "digest": "sha384", 00:17:36.247 "dhgroup": "ffdhe8192" 00:17:36.247 } 00:17:36.247 } 00:17:36.247 ]' 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.247 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.506 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:36.506 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.073 17:28:06 
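Note the asymmetry in the key3 pass that just completed: there is no ckey3, so the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) collapses to nothing, nvmf_subsystem_add_host receives only --dhchap-key key3, and the nvme connect carries a --dhchap-secret with no --dhchap-ctrl-secret. The key3 iterations therefore cover unidirectional authentication (the host proves itself to the controller only), while key0 through key2 cover the bidirectional case. Paraphrasing the relevant lines of target/auth.sh (the script itself indexes with the positional parameter $3 inside connect_authenticate; keyid is used here for readability):

  # ckey expands to the ctrlr-key arguments only when a controller key exists
  # for this index; for keyid 3 the array entry is empty.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"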
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.073 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.074 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.332 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.333 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.592 00:17:37.592 
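From here the matrix advances to sha512 paired with dhgroup null. The null group is a legitimate negotiable value, not a disabled state: it selects DH-HMAC-CHAP without the ephemeral FFDHE exchange, so the handshake rests on the shared secret alone. The qpair listings that follow should accordingly report "dhgroup": "null" while still reaching "state": "completed". Only the set_options arguments change; the rest of each iteration is identical to the sketch above:

  # Same shape as the sha384/ffdhe8192 passes, with the DH exchange switched off.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null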
17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.592 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.592 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.851 { 00:17:37.851 "cntlid": 97, 00:17:37.851 "qid": 0, 00:17:37.851 "state": "enabled", 00:17:37.851 "thread": "nvmf_tgt_poll_group_000", 00:17:37.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:37.851 "listen_address": { 00:17:37.851 "trtype": "TCP", 00:17:37.851 "adrfam": "IPv4", 00:17:37.851 "traddr": "10.0.0.2", 00:17:37.851 "trsvcid": "4420" 00:17:37.851 }, 00:17:37.851 "peer_address": { 00:17:37.851 "trtype": "TCP", 00:17:37.851 "adrfam": "IPv4", 00:17:37.851 "traddr": "10.0.0.1", 00:17:37.851 "trsvcid": "59838" 00:17:37.851 }, 00:17:37.851 "auth": { 00:17:37.851 "state": "completed", 00:17:37.851 "digest": "sha512", 00:17:37.851 "dhgroup": "null" 00:17:37.851 } 00:17:37.851 } 00:17:37.851 ]' 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.851 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.110 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:38.110 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.677 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.937 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.195 00:17:39.195 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.195 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.196 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.454 { 00:17:39.454 "cntlid": 99, 00:17:39.454 "qid": 0, 00:17:39.454 "state": "enabled", 00:17:39.454 "thread": "nvmf_tgt_poll_group_000", 00:17:39.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:39.454 "listen_address": { 00:17:39.454 "trtype": "TCP", 00:17:39.454 "adrfam": "IPv4", 00:17:39.454 "traddr": "10.0.0.2", 00:17:39.454 "trsvcid": "4420" 00:17:39.454 }, 00:17:39.454 "peer_address": { 00:17:39.454 "trtype": "TCP", 00:17:39.454 "adrfam": "IPv4", 00:17:39.454 "traddr": "10.0.0.1", 00:17:39.454 "trsvcid": "59860" 00:17:39.454 }, 00:17:39.454 "auth": { 00:17:39.454 "state": "completed", 00:17:39.454 "digest": "sha512", 00:17:39.454 "dhgroup": "null" 00:17:39.454 } 00:17:39.454 } 00:17:39.454 ]' 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.454 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.713 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:39.713 17:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.280 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:40.539 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.798 00:17:40.798 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.798 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.798 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.056 { 00:17:41.056 "cntlid": 101, 00:17:41.056 "qid": 0, 00:17:41.056 "state": "enabled", 00:17:41.056 "thread": "nvmf_tgt_poll_group_000", 00:17:41.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:41.056 "listen_address": { 00:17:41.056 "trtype": "TCP", 00:17:41.056 "adrfam": "IPv4", 00:17:41.056 "traddr": "10.0.0.2", 00:17:41.056 "trsvcid": "4420" 00:17:41.056 }, 00:17:41.056 "peer_address": { 00:17:41.056 "trtype": "TCP", 00:17:41.056 "adrfam": "IPv4", 00:17:41.056 "traddr": "10.0.0.1", 00:17:41.056 "trsvcid": "59886" 00:17:41.056 }, 00:17:41.056 "auth": { 00:17:41.056 "state": "completed", 00:17:41.056 "digest": "sha512", 00:17:41.056 "dhgroup": "null" 00:17:41.056 } 00:17:41.056 } 00:17:41.056 ]' 00:17:41.056 17:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.056 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.315 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:41.315 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.883 17:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.141 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.400 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.400 { 00:17:42.400 "cntlid": 103, 00:17:42.400 "qid": 0, 00:17:42.400 "state": "enabled", 00:17:42.400 "thread": "nvmf_tgt_poll_group_000", 00:17:42.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:42.400 "listen_address": { 00:17:42.400 "trtype": "TCP", 00:17:42.400 "adrfam": "IPv4", 00:17:42.400 "traddr": "10.0.0.2", 00:17:42.400 "trsvcid": "4420" 00:17:42.400 }, 00:17:42.400 "peer_address": { 00:17:42.400 "trtype": "TCP", 00:17:42.400 "adrfam": "IPv4", 00:17:42.400 "traddr": "10.0.0.1", 00:17:42.400 "trsvcid": "59928" 00:17:42.400 }, 00:17:42.400 "auth": { 00:17:42.400 "state": "completed", 00:17:42.400 "digest": "sha512", 00:17:42.400 "dhgroup": "null" 00:17:42.400 } 00:17:42.400 } 00:17:42.400 ]' 00:17:42.400 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.659 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.918 17:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:42.918 17:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.484 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
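The overall shape of this excerpt is the three nested loops of target/auth.sh visible in the trace markers (auth.sh@118 through @121): every digest is crossed with every DH group, and each combination is exercised for all four key indices. The run has now advanced to the sha512/ffdhe2048 column of that matrix. A minimal paraphrase of the driver loop, using the variable names that appear in the trace:

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # Constrain the host to one combination, then run the full
              # add_host / attach / verify / nvme-connect cycle for this key.
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                  --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done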
00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.743 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.002 00:17:44.002 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.002 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.002 17:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.002 { 00:17:44.002 "cntlid": 105, 00:17:44.002 "qid": 0, 00:17:44.002 "state": "enabled", 00:17:44.002 "thread": "nvmf_tgt_poll_group_000", 00:17:44.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:44.002 "listen_address": { 00:17:44.002 "trtype": "TCP", 00:17:44.002 "adrfam": "IPv4", 00:17:44.002 "traddr": "10.0.0.2", 00:17:44.002 "trsvcid": "4420" 00:17:44.002 }, 00:17:44.002 "peer_address": { 00:17:44.002 "trtype": "TCP", 00:17:44.002 "adrfam": "IPv4", 00:17:44.002 "traddr": "10.0.0.1", 00:17:44.002 "trsvcid": "59962" 00:17:44.002 }, 00:17:44.002 "auth": { 00:17:44.002 "state": "completed", 00:17:44.002 "digest": "sha512", 00:17:44.002 "dhgroup": "ffdhe2048" 00:17:44.002 } 00:17:44.002 } 00:17:44.002 ]' 00:17:44.002 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.261 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.261 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.261 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.261 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.261 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.261 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.261 17:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.520 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:44.520 17:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.085 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.344 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.344 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.602 { 00:17:45.602 "cntlid": 107, 00:17:45.602 "qid": 0, 00:17:45.602 "state": "enabled", 00:17:45.602 "thread": "nvmf_tgt_poll_group_000", 00:17:45.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:45.602 "listen_address": { 00:17:45.602 "trtype": "TCP", 00:17:45.602 "adrfam": "IPv4", 00:17:45.602 "traddr": "10.0.0.2", 00:17:45.602 "trsvcid": "4420" 00:17:45.602 }, 00:17:45.602 "peer_address": { 00:17:45.602 "trtype": "TCP", 00:17:45.602 "adrfam": "IPv4", 00:17:45.602 "traddr": "10.0.0.1", 00:17:45.602 "trsvcid": "53564" 00:17:45.602 }, 00:17:45.602 "auth": { 00:17:45.602 "state": "completed", 00:17:45.602 "digest": "sha512", 00:17:45.602 "dhgroup": "ffdhe2048" 00:17:45.602 } 00:17:45.602 } 00:17:45.602 ]' 00:17:45.602 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.860 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.860 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.860 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.861 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:45.861 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.861 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.861 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.119 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:46.119 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.686 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
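[Editor's note] The steps the trace keeps cycling through — bdev_nvme_set_options, nvmf_subsystem_add_host, bdev_nvme_attach_controller, the qpair check, then teardown — are one DH-HMAC-CHAP round-trip per key index. A condensed sketch of that round-trip, reconstructed only from the commands visible in this log (the loop scaffolding and variable names are illustrative, not the literal target/auth.sh source; rpc.py with no -s talks to the target's default socket, while the host-side SPDK instance is reached via -s /var/tmp/host.sock):

    #!/usr/bin/env bash
    # Sketch of one authentication round-trip per key index, as exercised above.
    # NQNs and addresses are taken verbatim from the log; the loop itself is an
    # illustrative reconstruction of the flow, not the script's exact source.
    HOSTRPC="rpc.py -s /var/tmp/host.sock"   # host-side SPDK instance
    TGTRPC="rpc.py"                          # target side, default RPC socket
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

    for keyid in 0 1 2; do                   # key3 carries no ctrlr key (see below)
        # Pin the host to one digest/dhgroup so the handshake must negotiate it.
        $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
        # Allow this host on the subsystem with the matching key pair.
        $TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Attaching a controller forces the DH-HMAC-CHAP exchange.
        $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Verify the negotiated auth parameters, then tear down for the next index.
        $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
        $HOSTRPC bdev_nvme_detach_controller nvme0
        $TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    done

In the actual trace the nvme-cli connect/disconnect with --dhchap-secret/--dhchap-ctrl-secret sits between the detach and the remove_host; it is omitted here only to keep the sketch short.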
00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.945 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.945 00:17:47.203 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.203 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.203 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.203 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.203 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.203 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.204 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.204 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.204 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.204 { 00:17:47.204 "cntlid": 109, 00:17:47.204 "qid": 0, 00:17:47.204 "state": "enabled", 00:17:47.204 "thread": "nvmf_tgt_poll_group_000", 00:17:47.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:47.204 "listen_address": { 00:17:47.204 "trtype": "TCP", 00:17:47.204 "adrfam": "IPv4", 00:17:47.204 "traddr": "10.0.0.2", 00:17:47.204 "trsvcid": "4420" 00:17:47.204 }, 00:17:47.204 "peer_address": { 00:17:47.204 "trtype": "TCP", 00:17:47.204 "adrfam": "IPv4", 00:17:47.204 "traddr": "10.0.0.1", 00:17:47.204 "trsvcid": "53612" 00:17:47.204 }, 00:17:47.204 "auth": { 00:17:47.204 "state": "completed", 00:17:47.204 "digest": "sha512", 00:17:47.204 "dhgroup": "ffdhe2048" 00:17:47.204 } 00:17:47.204 } 00:17:47.204 ]' 00:17:47.204 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.462 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.462 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.462 17:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.462 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.462 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.462 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.462 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.720 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:47.720 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.287 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.546 17:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.546 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.546 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.805 { 00:17:48.805 "cntlid": 111, 00:17:48.805 "qid": 0, 00:17:48.805 "state": "enabled", 00:17:48.805 "thread": "nvmf_tgt_poll_group_000", 00:17:48.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:48.805 "listen_address": { 00:17:48.805 "trtype": "TCP", 00:17:48.805 "adrfam": "IPv4", 00:17:48.805 "traddr": "10.0.0.2", 00:17:48.805 "trsvcid": "4420" 00:17:48.805 }, 00:17:48.805 "peer_address": { 00:17:48.805 "trtype": "TCP", 00:17:48.805 "adrfam": "IPv4", 00:17:48.805 "traddr": "10.0.0.1", 00:17:48.805 "trsvcid": "53644" 00:17:48.805 }, 00:17:48.805 "auth": { 00:17:48.805 "state": "completed", 00:17:48.805 "digest": "sha512", 00:17:48.805 "dhgroup": "ffdhe2048" 00:17:48.805 } 00:17:48.805 } 00:17:48.805 ]' 00:17:48.805 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.064 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.064 
17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.065 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.065 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.065 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.065 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.065 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.324 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:49.324 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.891 17:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.891 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.892 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.892 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.892 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.892 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.892 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.150 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.409 { 00:17:50.409 "cntlid": 113, 00:17:50.409 "qid": 0, 00:17:50.409 "state": "enabled", 00:17:50.409 "thread": "nvmf_tgt_poll_group_000", 00:17:50.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:50.409 "listen_address": { 00:17:50.409 "trtype": "TCP", 00:17:50.409 "adrfam": "IPv4", 00:17:50.409 "traddr": "10.0.0.2", 00:17:50.409 "trsvcid": "4420" 00:17:50.409 }, 00:17:50.409 "peer_address": { 00:17:50.409 "trtype": "TCP", 00:17:50.409 "adrfam": "IPv4", 00:17:50.409 "traddr": "10.0.0.1", 00:17:50.409 "trsvcid": "53670" 00:17:50.409 }, 00:17:50.409 "auth": { 00:17:50.409 "state": "completed", 00:17:50.409 "digest": "sha512", 00:17:50.409 "dhgroup": "ffdhe3072" 00:17:50.409 } 00:17:50.409 } 00:17:50.409 ]' 00:17:50.409 17:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.409 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.668 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.668 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.668 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.668 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.668 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.927 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:50.927 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.494 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.753 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.012 00:17:52.012 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.012 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.012 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.012 { 00:17:52.012 "cntlid": 115, 00:17:52.012 "qid": 0, 00:17:52.012 "state": "enabled", 00:17:52.012 "thread": "nvmf_tgt_poll_group_000", 00:17:52.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:52.012 "listen_address": { 00:17:52.012 "trtype": "TCP", 00:17:52.012 "adrfam": "IPv4", 00:17:52.012 "traddr": "10.0.0.2", 00:17:52.012 "trsvcid": "4420" 00:17:52.012 }, 00:17:52.012 "peer_address": { 00:17:52.012 "trtype": "TCP", 00:17:52.012 "adrfam": "IPv4", 
00:17:52.012 "traddr": "10.0.0.1", 00:17:52.012 "trsvcid": "53700" 00:17:52.012 }, 00:17:52.012 "auth": { 00:17:52.012 "state": "completed", 00:17:52.012 "digest": "sha512", 00:17:52.012 "dhgroup": "ffdhe3072" 00:17:52.012 } 00:17:52.012 } 00:17:52.012 ]' 00:17:52.012 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.271 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.529 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:52.529 17:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.097 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.356 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.616 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.616 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.875 { 00:17:53.875 "cntlid": 117, 00:17:53.875 "qid": 0, 00:17:53.875 "state": "enabled", 00:17:53.875 "thread": "nvmf_tgt_poll_group_000", 00:17:53.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:53.875 "listen_address": { 00:17:53.875 "trtype": "TCP", 
00:17:53.875 "adrfam": "IPv4", 00:17:53.875 "traddr": "10.0.0.2", 00:17:53.875 "trsvcid": "4420" 00:17:53.875 }, 00:17:53.875 "peer_address": { 00:17:53.875 "trtype": "TCP", 00:17:53.875 "adrfam": "IPv4", 00:17:53.875 "traddr": "10.0.0.1", 00:17:53.875 "trsvcid": "53728" 00:17:53.875 }, 00:17:53.875 "auth": { 00:17:53.875 "state": "completed", 00:17:53.875 "digest": "sha512", 00:17:53.875 "dhgroup": "ffdhe3072" 00:17:53.875 } 00:17:53.875 } 00:17:53.875 ]' 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.875 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.134 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:54.134 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.819 17:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.078 00:17:55.078 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.078 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.078 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.336 { 00:17:55.336 "cntlid": 119, 00:17:55.336 "qid": 0, 00:17:55.336 "state": "enabled", 00:17:55.336 "thread": "nvmf_tgt_poll_group_000", 00:17:55.336 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:55.336 "listen_address": { 00:17:55.336 "trtype": "TCP", 00:17:55.336 "adrfam": "IPv4", 00:17:55.336 "traddr": "10.0.0.2", 00:17:55.336 "trsvcid": "4420" 00:17:55.336 }, 00:17:55.336 "peer_address": { 00:17:55.336 "trtype": "TCP", 00:17:55.336 "adrfam": "IPv4", 00:17:55.336 "traddr": "10.0.0.1", 00:17:55.336 "trsvcid": "39960" 00:17:55.336 }, 00:17:55.336 "auth": { 00:17:55.336 "state": "completed", 00:17:55.336 "digest": "sha512", 00:17:55.336 "dhgroup": "ffdhe3072" 00:17:55.336 } 00:17:55.336 } 00:17:55.336 ]' 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.336 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.337 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.595 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.595 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.595 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.595 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.595 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.854 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:55.854 17:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:17:56.423 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.423 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:56.423 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.423 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.423 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.423 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.424 17:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.424 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.683 00:17:56.943 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.943 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.943 17:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.943 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.943 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.943 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.943 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.943 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.943 17:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.943 { 00:17:56.943 "cntlid": 121, 00:17:56.943 "qid": 0, 00:17:56.943 "state": "enabled", 00:17:56.943 "thread": "nvmf_tgt_poll_group_000", 00:17:56.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:56.943 "listen_address": { 00:17:56.943 "trtype": "TCP", 00:17:56.943 "adrfam": "IPv4", 00:17:56.943 "traddr": "10.0.0.2", 00:17:56.943 "trsvcid": "4420" 00:17:56.943 }, 00:17:56.943 "peer_address": { 00:17:56.943 "trtype": "TCP", 00:17:56.943 "adrfam": "IPv4", 00:17:56.943 "traddr": "10.0.0.1", 00:17:56.943 "trsvcid": "40004" 00:17:56.943 }, 00:17:56.943 "auth": { 00:17:56.943 "state": "completed", 00:17:56.943 "digest": "sha512", 00:17:56.943 "dhgroup": "ffdhe4096" 00:17:56.943 } 00:17:56.943 } 00:17:56.943 ]' 00:17:56.943 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.202 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.461 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:57.461 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
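[Editor's note] The reason the key3 passes above call nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 but no --dhchap-ctrlr-key is the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that xtrace keeps printing: when the ckeys slot for an index is empty, the array expands to zero words and the option disappears from the command line entirely. A minimal reproduction of the idiom (array contents here are illustrative, not the script's real key material):

    # ${var:+...} expands to the alternate text only when var is set and
    # non-empty, so an empty ckey slot makes the whole option vanish.
    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # index 3 deliberately empty
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra args: ${ckey[@]:-<none>}"          # -> extra args: <none>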
00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.029 17:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.029 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.288 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.547 { 00:17:58.547 "cntlid": 123, 00:17:58.547 "qid": 0, 00:17:58.547 "state": "enabled", 00:17:58.547 "thread": "nvmf_tgt_poll_group_000", 00:17:58.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:58.547 "listen_address": { 00:17:58.547 "trtype": "TCP", 00:17:58.547 "adrfam": "IPv4", 00:17:58.547 "traddr": "10.0.0.2", 00:17:58.547 "trsvcid": "4420" 00:17:58.547 }, 00:17:58.547 "peer_address": { 00:17:58.547 "trtype": "TCP", 00:17:58.547 "adrfam": "IPv4", 00:17:58.547 "traddr": "10.0.0.1", 00:17:58.547 "trsvcid": "40014" 00:17:58.547 }, 00:17:58.547 "auth": { 00:17:58.547 "state": "completed", 00:17:58.547 "digest": "sha512", 00:17:58.547 "dhgroup": "ffdhe4096" 00:17:58.547 } 00:17:58.547 } 00:17:58.547 ]' 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.547 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.806 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.806 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.806 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.806 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.806 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.065 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:59.065 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.633 17:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.633 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.892 00:18:00.150 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.150 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.150 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.150 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.150 17:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.150 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.151 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.151 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.151 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.151 { 00:18:00.151 "cntlid": 125, 00:18:00.151 "qid": 0, 00:18:00.151 "state": "enabled", 00:18:00.151 "thread": "nvmf_tgt_poll_group_000", 00:18:00.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:00.151 "listen_address": { 00:18:00.151 "trtype": "TCP", 00:18:00.151 "adrfam": "IPv4", 00:18:00.151 "traddr": "10.0.0.2", 00:18:00.151 "trsvcid": "4420" 00:18:00.151 }, 00:18:00.151 "peer_address": { 00:18:00.151 "trtype": "TCP", 00:18:00.151 "adrfam": "IPv4", 00:18:00.151 "traddr": "10.0.0.1", 00:18:00.151 "trsvcid": "40044" 00:18:00.151 }, 00:18:00.151 "auth": { 00:18:00.151 "state": "completed", 00:18:00.151 "digest": "sha512", 00:18:00.151 "dhgroup": "ffdhe4096" 00:18:00.151 } 00:18:00.151 } 00:18:00.151 ]' 00:18:00.151 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.151 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.151 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.409 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.409 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.409 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.409 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.409 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.668 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:18:00.668 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.235 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.494 00:18:01.494 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.494 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.494 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.753 17:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.753 { 00:18:01.753 "cntlid": 127, 00:18:01.753 "qid": 0, 00:18:01.753 "state": "enabled", 00:18:01.753 "thread": "nvmf_tgt_poll_group_000", 00:18:01.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:01.753 "listen_address": { 00:18:01.753 "trtype": "TCP", 00:18:01.753 "adrfam": "IPv4", 00:18:01.753 "traddr": "10.0.0.2", 00:18:01.753 "trsvcid": "4420" 00:18:01.753 }, 00:18:01.753 "peer_address": { 00:18:01.753 "trtype": "TCP", 00:18:01.753 "adrfam": "IPv4", 00:18:01.753 "traddr": "10.0.0.1", 00:18:01.753 "trsvcid": "40068" 00:18:01.753 }, 00:18:01.753 "auth": { 00:18:01.753 "state": "completed", 00:18:01.753 "digest": "sha512", 00:18:01.753 "dhgroup": "ffdhe4096" 00:18:01.753 } 00:18:01.753 } 00:18:01.753 ]' 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.753 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.011 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.012 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.012 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.012 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.012 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.012 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:02.012 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.947 17:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.206 00:18:03.206 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.206 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.206 
17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.464 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.464 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.465 { 00:18:03.465 "cntlid": 129, 00:18:03.465 "qid": 0, 00:18:03.465 "state": "enabled", 00:18:03.465 "thread": "nvmf_tgt_poll_group_000", 00:18:03.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:03.465 "listen_address": { 00:18:03.465 "trtype": "TCP", 00:18:03.465 "adrfam": "IPv4", 00:18:03.465 "traddr": "10.0.0.2", 00:18:03.465 "trsvcid": "4420" 00:18:03.465 }, 00:18:03.465 "peer_address": { 00:18:03.465 "trtype": "TCP", 00:18:03.465 "adrfam": "IPv4", 00:18:03.465 "traddr": "10.0.0.1", 00:18:03.465 "trsvcid": "40090" 00:18:03.465 }, 00:18:03.465 "auth": { 00:18:03.465 "state": "completed", 00:18:03.465 "digest": "sha512", 00:18:03.465 "dhgroup": "ffdhe6144" 00:18:03.465 } 00:18:03.465 } 00:18:03.465 ]' 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.465 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.724 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.724 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.724 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.724 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:18:03.724 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret 
DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:18:04.291 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.291 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:04.291 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.291 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.549 17:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.117 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.117 { 00:18:05.117 "cntlid": 131, 00:18:05.117 "qid": 0, 00:18:05.117 "state": "enabled", 00:18:05.117 "thread": "nvmf_tgt_poll_group_000", 00:18:05.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:05.117 "listen_address": { 00:18:05.117 "trtype": "TCP", 00:18:05.117 "adrfam": "IPv4", 00:18:05.117 "traddr": "10.0.0.2", 00:18:05.117 "trsvcid": "4420" 00:18:05.117 }, 00:18:05.117 "peer_address": { 00:18:05.117 "trtype": "TCP", 00:18:05.117 "adrfam": "IPv4", 00:18:05.117 "traddr": "10.0.0.1", 00:18:05.117 "trsvcid": "40116" 00:18:05.117 }, 00:18:05.117 "auth": { 00:18:05.117 "state": "completed", 00:18:05.117 "digest": "sha512", 00:18:05.117 "dhgroup": "ffdhe6144" 00:18:05.117 } 00:18:05.117 } 00:18:05.117 ]' 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.117 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.376 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.376 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.376 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.376 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.376 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.635 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:18:05.635 17:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.202 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.460 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.460 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.460 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.460 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.719 00:18:06.719 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.719 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.719 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.978 { 00:18:06.978 "cntlid": 133, 00:18:06.978 "qid": 0, 00:18:06.978 "state": "enabled", 00:18:06.978 "thread": "nvmf_tgt_poll_group_000", 00:18:06.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:06.978 "listen_address": { 00:18:06.978 "trtype": "TCP", 00:18:06.978 "adrfam": "IPv4", 00:18:06.978 "traddr": "10.0.0.2", 00:18:06.978 "trsvcid": "4420" 00:18:06.978 }, 00:18:06.978 "peer_address": { 00:18:06.978 "trtype": "TCP", 00:18:06.978 "adrfam": "IPv4", 00:18:06.978 "traddr": "10.0.0.1", 00:18:06.978 "trsvcid": "58440" 00:18:06.978 }, 00:18:06.978 "auth": { 00:18:06.978 "state": "completed", 00:18:06.978 "digest": "sha512", 00:18:06.978 "dhgroup": "ffdhe6144" 00:18:06.978 } 00:18:06.978 } 00:18:06.978 ]' 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.978 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.978 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.978 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.978 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.978 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.978 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.237 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret 
DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:18:07.237 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.805 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:08.064 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.323 00:18:08.323 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.323 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.323 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.582 { 00:18:08.582 "cntlid": 135, 00:18:08.582 "qid": 0, 00:18:08.582 "state": "enabled", 00:18:08.582 "thread": "nvmf_tgt_poll_group_000", 00:18:08.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:08.582 "listen_address": { 00:18:08.582 "trtype": "TCP", 00:18:08.582 "adrfam": "IPv4", 00:18:08.582 "traddr": "10.0.0.2", 00:18:08.582 "trsvcid": "4420" 00:18:08.582 }, 00:18:08.582 "peer_address": { 00:18:08.582 "trtype": "TCP", 00:18:08.582 "adrfam": "IPv4", 00:18:08.582 "traddr": "10.0.0.1", 00:18:08.582 "trsvcid": "58474" 00:18:08.582 }, 00:18:08.582 "auth": { 00:18:08.582 "state": "completed", 00:18:08.582 "digest": "sha512", 00:18:08.582 "dhgroup": "ffdhe6144" 00:18:08.582 } 00:18:08.582 } 00:18:08.582 ]' 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.582 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.841 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:08.841 17:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.409 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.668 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.235 00:18:10.235 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.235 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.235 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.494 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.494 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.494 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.494 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.494 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.494 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.494 { 00:18:10.494 "cntlid": 137, 00:18:10.494 "qid": 0, 00:18:10.494 "state": "enabled", 00:18:10.494 "thread": "nvmf_tgt_poll_group_000", 00:18:10.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:10.494 "listen_address": { 00:18:10.494 "trtype": "TCP", 00:18:10.494 "adrfam": "IPv4", 00:18:10.494 "traddr": "10.0.0.2", 00:18:10.494 "trsvcid": "4420" 00:18:10.494 }, 00:18:10.494 "peer_address": { 00:18:10.494 "trtype": "TCP", 00:18:10.494 "adrfam": "IPv4", 00:18:10.494 "traddr": "10.0.0.1", 00:18:10.494 "trsvcid": "58490" 00:18:10.494 }, 00:18:10.494 "auth": { 00:18:10.494 "state": "completed", 00:18:10.494 "digest": "sha512", 00:18:10.494 "dhgroup": "ffdhe8192" 00:18:10.495 } 00:18:10.495 } 00:18:10.495 ]' 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.495 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.753 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:18:10.753 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.320 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.579 17:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.579 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.838 00:18:11.838 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.838 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.838 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.097 { 00:18:12.097 "cntlid": 139, 00:18:12.097 "qid": 0, 00:18:12.097 "state": "enabled", 00:18:12.097 "thread": "nvmf_tgt_poll_group_000", 00:18:12.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:12.097 "listen_address": { 00:18:12.097 "trtype": "TCP", 00:18:12.097 "adrfam": "IPv4", 00:18:12.097 "traddr": "10.0.0.2", 00:18:12.097 "trsvcid": "4420" 00:18:12.097 }, 00:18:12.097 "peer_address": { 00:18:12.097 "trtype": "TCP", 00:18:12.097 "adrfam": "IPv4", 00:18:12.097 "traddr": "10.0.0.1", 00:18:12.097 "trsvcid": "58526" 00:18:12.097 }, 00:18:12.097 "auth": { 00:18:12.097 "state": "completed", 00:18:12.097 "digest": "sha512", 00:18:12.097 "dhgroup": "ffdhe8192" 00:18:12.097 } 00:18:12.097 } 00:18:12.097 ]' 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.097 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.356 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.356 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.356 17:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.356 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.356 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.356 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:18:12.356 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: --dhchap-ctrl-secret DHHC-1:02:OWJlN2IwNjhlNWU2NWNmNzQ3NTBiMGE3MGI5NWQ5MmMzMTZmYzU1OWFkYTEzZjJidK3ztA==: 00:18:12.923 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.182 17:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.182 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.750 00:18:13.750 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.750 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.750 17:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.009 { 00:18:14.009 "cntlid": 141, 00:18:14.009 "qid": 0, 00:18:14.009 "state": "enabled", 00:18:14.009 "thread": "nvmf_tgt_poll_group_000", 00:18:14.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:14.009 "listen_address": { 00:18:14.009 "trtype": "TCP", 00:18:14.009 "adrfam": "IPv4", 00:18:14.009 "traddr": "10.0.0.2", 00:18:14.009 "trsvcid": "4420" 00:18:14.009 }, 00:18:14.009 "peer_address": { 00:18:14.009 "trtype": "TCP", 00:18:14.009 "adrfam": "IPv4", 00:18:14.009 "traddr": "10.0.0.1", 00:18:14.009 "trsvcid": "58548" 00:18:14.009 }, 00:18:14.009 "auth": { 00:18:14.009 "state": "completed", 00:18:14.009 "digest": "sha512", 00:18:14.009 "dhgroup": "ffdhe8192" 00:18:14.009 } 00:18:14.009 } 00:18:14.009 ]' 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.009 17:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.009 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.268 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:18:14.268 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:01:NTdkNTZiMWM1YjY2MDE3MDRkMDMwZmEzNTJlYjUzZTMf15o7: 00:18:14.835 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.835 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:14.835 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.835 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.835 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.835 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.836 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.836 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.095 17:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.095 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.662 00:18:15.662 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.662 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.662 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.921 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.921 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.921 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.921 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.921 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.921 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.921 { 00:18:15.921 "cntlid": 143, 00:18:15.921 "qid": 0, 00:18:15.921 "state": "enabled", 00:18:15.921 "thread": "nvmf_tgt_poll_group_000", 00:18:15.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:15.921 "listen_address": { 00:18:15.921 "trtype": "TCP", 00:18:15.921 "adrfam": "IPv4", 00:18:15.921 "traddr": "10.0.0.2", 00:18:15.921 "trsvcid": "4420" 00:18:15.921 }, 00:18:15.921 "peer_address": { 00:18:15.921 "trtype": "TCP", 00:18:15.921 "adrfam": "IPv4", 00:18:15.921 "traddr": "10.0.0.1", 00:18:15.921 "trsvcid": "39820" 00:18:15.921 }, 00:18:15.921 "auth": { 00:18:15.921 "state": "completed", 00:18:15.921 "digest": "sha512", 00:18:15.922 "dhgroup": "ffdhe8192" 00:18:15.922 } 00:18:15.922 } 00:18:15.922 ]' 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.922 
17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.922 17:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.179 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:16.179 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.746 17:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.005 17:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.005 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.573 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.573 { 00:18:17.573 "cntlid": 145, 00:18:17.573 "qid": 0, 00:18:17.573 "state": "enabled", 00:18:17.573 "thread": "nvmf_tgt_poll_group_000", 00:18:17.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:17.573 "listen_address": { 00:18:17.573 "trtype": "TCP", 00:18:17.573 "adrfam": "IPv4", 00:18:17.573 "traddr": "10.0.0.2", 00:18:17.573 "trsvcid": "4420" 00:18:17.573 }, 00:18:17.573 "peer_address": { 00:18:17.573 
"trtype": "TCP", 00:18:17.573 "adrfam": "IPv4", 00:18:17.573 "traddr": "10.0.0.1", 00:18:17.573 "trsvcid": "39866" 00:18:17.573 }, 00:18:17.573 "auth": { 00:18:17.573 "state": "completed", 00:18:17.573 "digest": "sha512", 00:18:17.573 "dhgroup": "ffdhe8192" 00:18:17.573 } 00:18:17.573 } 00:18:17.573 ]' 00:18:17.573 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.831 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.832 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.832 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.832 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.832 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.832 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.832 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.090 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:18:18.090 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjRiMDFkMDVhYmZlMzlkZjU3Yjk0ZmQ3NzI2ZjA0OWUzODlhM2Y3ZmMyNTM5ZTk2/3DGMQ==: --dhchap-ctrl-secret DHHC-1:03:YzAyZTU4YjA1Y2E1Y2FkNzI0NGFlYjU1ZGIxYWUwMGVhZGM1OTVlMGIzNTMxZDkwMzI5ZWU5ZGNhNTM0NzI5OEba8j4=: 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.657 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:18.658 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:18.658 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:18.916 request: 00:18:18.916 { 00:18:18.916 "name": "nvme0", 00:18:18.916 "trtype": "tcp", 00:18:18.916 "traddr": "10.0.0.2", 00:18:18.916 "adrfam": "ipv4", 00:18:18.916 "trsvcid": "4420", 00:18:18.916 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:18.916 "prchk_reftag": false, 00:18:18.916 "prchk_guard": false, 00:18:18.916 "hdgst": false, 00:18:18.916 "ddgst": false, 00:18:18.916 "dhchap_key": "key2", 00:18:18.916 "allow_unrecognized_csi": false, 00:18:18.916 "method": "bdev_nvme_attach_controller", 00:18:18.916 "req_id": 1 00:18:18.916 } 00:18:18.916 Got JSON-RPC error response 00:18:18.916 response: 00:18:18.916 { 00:18:18.916 "code": -5, 00:18:18.916 "message": "Input/output error" 00:18:18.916 } 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.175 17:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.175 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.434 request: 00:18:19.434 { 00:18:19.434 "name": "nvme0", 00:18:19.434 "trtype": "tcp", 00:18:19.434 "traddr": "10.0.0.2", 00:18:19.434 "adrfam": "ipv4", 00:18:19.434 "trsvcid": "4420", 00:18:19.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:19.435 "prchk_reftag": false, 00:18:19.435 "prchk_guard": false, 00:18:19.435 "hdgst": false, 00:18:19.435 "ddgst": false, 00:18:19.435 "dhchap_key": "key1", 00:18:19.435 "dhchap_ctrlr_key": "ckey2", 00:18:19.435 "allow_unrecognized_csi": false, 00:18:19.435 "method": "bdev_nvme_attach_controller", 00:18:19.435 "req_id": 1 00:18:19.435 } 00:18:19.435 Got JSON-RPC error response 00:18:19.435 response: 00:18:19.435 { 00:18:19.435 "code": -5, 00:18:19.435 "message": "Input/output error" 00:18:19.435 } 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:19.435 17:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.435 17:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.003 request: 00:18:20.003 { 00:18:20.003 "name": "nvme0", 00:18:20.003 "trtype": "tcp", 00:18:20.003 "traddr": "10.0.0.2", 00:18:20.003 "adrfam": "ipv4", 00:18:20.003 "trsvcid": "4420", 00:18:20.003 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:20.003 "prchk_reftag": false, 00:18:20.003 "prchk_guard": false, 00:18:20.003 "hdgst": false, 00:18:20.003 "ddgst": false, 00:18:20.003 "dhchap_key": "key1", 00:18:20.003 "dhchap_ctrlr_key": "ckey1", 00:18:20.003 "allow_unrecognized_csi": false, 00:18:20.003 "method": "bdev_nvme_attach_controller", 00:18:20.003 "req_id": 1 00:18:20.003 } 00:18:20.003 Got JSON-RPC error response 00:18:20.003 response: 00:18:20.003 { 00:18:20.003 "code": -5, 00:18:20.003 "message": "Input/output error" 00:18:20.003 } 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2558966 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2558966 ']' 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2558966 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558966 00:18:20.003 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.004 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.004 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558966' 00:18:20.004 killing process with pid 2558966 00:18:20.004 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2558966 00:18:20.004 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2558966 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2580918 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2580918 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2580918 ']' 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.263 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2580918 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2580918 ']' 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
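The DHHC-1 strings exchanged throughout this run use the standard NVMe DH-HMAC-CHAP secret encoding, "DHHC-1:<hmac id>:<base64 key material>:", where the hmac id is 00 for an untransformed secret and 01/02/03 for SHA-256/384/512, matching the key-null/key-sha256/key-sha384/key-sha512 files registered in the trace below. A minimal sketch of producing such a secret and staging it for those keyring RPCs; 'nvme gen-dhchap-key' and its flag spelling are assumptions about the installed nvme-cli, not taken from this log:

  # generate an HMAC-SHA-256 transformed secret bound to the host NQN
  # (hypothetical invocation; verify with 'nvme gen-dhchap-key --help' locally)
  key=$(nvme gen-dhchap-key -m 1 \
        -n nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562)
  printf '%s\n' "$key" > /tmp/spdk.key-sha256.example   # hypothetical path
  # the test then registers each such file under a keyring name, as traced below:
  #   rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.example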
00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.522 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 null0 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Wv6 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hTm ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hTm 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rQ6 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.33M ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.33M 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:20.782 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FUw 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.APp ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.APp 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yZz 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.782 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
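Each connect_authenticate pass above and below repeats the same RPC shape. Condensed for reference, with paths, flags, and NQNs copied from this run; the one assumption is that the target-side rpc.py uses its default socket, since the trace only shows the rpc_cmd wrapper:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host app: pin negotiation to a single digest/dhgroup combination
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
       --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target: authorize the host NQN, bound to a specific keyring key
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key3
  # host app: attach; DH-HMAC-CHAP runs as part of the fabric CONNECT
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
       -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key3
  # target: the new qpair must report a completed authentication
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # "completed"
  # teardown mirrors the log: detach on the host, de-authorize on the target
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN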
00:18:20.783 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.719 nvme0n1 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.719 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.978 { 00:18:21.978 "cntlid": 1, 00:18:21.978 "qid": 0, 00:18:21.978 "state": "enabled", 00:18:21.978 "thread": "nvmf_tgt_poll_group_000", 00:18:21.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:21.978 "listen_address": { 00:18:21.978 "trtype": "TCP", 00:18:21.978 "adrfam": "IPv4", 00:18:21.978 "traddr": "10.0.0.2", 00:18:21.978 "trsvcid": "4420" 00:18:21.978 }, 00:18:21.978 "peer_address": { 00:18:21.978 "trtype": "TCP", 00:18:21.978 "adrfam": "IPv4", 00:18:21.978 "traddr": "10.0.0.1", 00:18:21.978 "trsvcid": "39916" 00:18:21.978 }, 00:18:21.978 "auth": { 00:18:21.978 "state": "completed", 00:18:21.978 "digest": "sha512", 00:18:21.978 "dhgroup": "ffdhe8192" 00:18:21.978 } 00:18:21.978 } 00:18:21.978 ]' 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.978 17:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.978 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.978 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.978 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.237 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:22.237 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.804 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:22.805 17:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.063 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.063 request: 00:18:23.063 { 00:18:23.063 "name": "nvme0", 00:18:23.063 "trtype": "tcp", 00:18:23.063 "traddr": "10.0.0.2", 00:18:23.063 "adrfam": "ipv4", 00:18:23.063 "trsvcid": "4420", 00:18:23.063 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:23.063 "prchk_reftag": false, 00:18:23.063 "prchk_guard": false, 00:18:23.063 "hdgst": false, 00:18:23.063 "ddgst": false, 00:18:23.063 "dhchap_key": "key3", 00:18:23.063 "allow_unrecognized_csi": false, 00:18:23.063 "method": "bdev_nvme_attach_controller", 00:18:23.063 "req_id": 1 00:18:23.063 } 00:18:23.063 Got JSON-RPC error response 00:18:23.063 response: 00:18:23.063 { 00:18:23.063 "code": -5, 00:18:23.063 "message": "Input/output error" 00:18:23.063 } 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.322 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.580 request: 00:18:23.580 { 00:18:23.580 "name": "nvme0", 00:18:23.580 "trtype": "tcp", 00:18:23.580 "traddr": "10.0.0.2", 00:18:23.580 "adrfam": "ipv4", 00:18:23.580 "trsvcid": "4420", 00:18:23.580 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:23.580 "prchk_reftag": false, 00:18:23.580 "prchk_guard": false, 00:18:23.580 "hdgst": false, 00:18:23.580 "ddgst": false, 00:18:23.580 "dhchap_key": "key3", 00:18:23.580 "allow_unrecognized_csi": false, 00:18:23.580 "method": "bdev_nvme_attach_controller", 00:18:23.580 "req_id": 1 00:18:23.580 } 00:18:23.580 Got JSON-RPC error response 00:18:23.580 response: 00:18:23.580 { 00:18:23.580 "code": -5, 00:18:23.580 "message": "Input/output error" 00:18:23.580 } 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.580 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.839 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.840 17:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.097 request: 00:18:24.097 { 00:18:24.097 "name": "nvme0", 00:18:24.097 "trtype": "tcp", 00:18:24.097 "traddr": "10.0.0.2", 00:18:24.097 "adrfam": "ipv4", 00:18:24.097 "trsvcid": "4420", 00:18:24.097 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:24.097 "prchk_reftag": false, 00:18:24.097 "prchk_guard": false, 00:18:24.097 "hdgst": false, 00:18:24.097 "ddgst": false, 00:18:24.097 "dhchap_key": "key0", 00:18:24.097 "dhchap_ctrlr_key": "key1", 00:18:24.097 "allow_unrecognized_csi": false, 00:18:24.097 "method": "bdev_nvme_attach_controller", 00:18:24.097 "req_id": 1 00:18:24.097 } 00:18:24.097 Got JSON-RPC error response 00:18:24.097 response: 00:18:24.097 { 00:18:24.097 "code": -5, 00:18:24.097 "message": "Input/output error" 00:18:24.097 } 00:18:24.097 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:24.097 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.097 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.097 17:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.097 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:24.097 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:24.097 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:24.356 nvme0n1 00:18:24.356 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:24.356 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:24.356 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.615 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.615 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.615 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:24.874 17:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:25.810 nvme0n1 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:25.810 17:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.069 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.069 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:26.069 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: --dhchap-ctrl-secret DHHC-1:03:NjEwZjIwODMwYTZiNTFiNDYzMDJmOTY4NTUyMGQ5NDRjM2E5ZmIwYjBlYzQxNjY4ZGUzYWFhZDA5ZTdiZTY4M+BeRSc=: 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.635 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:26.894 17:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.153 request: 00:18:27.153 { 00:18:27.153 "name": "nvme0", 00:18:27.153 "trtype": "tcp", 00:18:27.153 "traddr": "10.0.0.2", 00:18:27.153 "adrfam": "ipv4", 00:18:27.153 "trsvcid": "4420", 00:18:27.153 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:27.153 "prchk_reftag": false, 00:18:27.153 "prchk_guard": false, 00:18:27.153 "hdgst": false, 00:18:27.153 "ddgst": false, 00:18:27.153 "dhchap_key": "key1", 00:18:27.153 "allow_unrecognized_csi": false, 00:18:27.153 "method": "bdev_nvme_attach_controller", 00:18:27.153 "req_id": 1 00:18:27.153 } 00:18:27.153 Got JSON-RPC error response 00:18:27.153 response: 00:18:27.153 { 00:18:27.153 "code": -5, 00:18:27.153 "message": "Input/output error" 00:18:27.153 } 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.153 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.090 nvme0n1 00:18:28.090 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:28.090 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:28.090 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.090 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.090 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.090 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:28.349 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:28.608 nvme0n1 00:18:28.608 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:28.608 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.608 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:28.866 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.867 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.867 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: '' 2s 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:29.125 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: ]] 00:18:29.126 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDQ5MzBhY2ExYmRhN2RiZWEwM2FkNzIyNzZkNWIxMGIfBWUs: 00:18:29.126 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:29.126 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:29.126 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:31.027 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: 2s 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: ]] 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzdlNGM0NTcyYmY1NzA0ZjQwMWQ5NTE4ZTZmMmFiNTBhMWRmZTE3ZmYwZWUwNTAyGQjLOA==: 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:31.028 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:33.561 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:33.820 nvme0n1 00:18:34.100 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.100 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.100 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.100 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.100 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.100 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.451 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:34.451 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:34.451 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:34.742 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.001 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.002 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.569 request: 00:18:35.569 { 00:18:35.569 "name": "nvme0", 00:18:35.569 "dhchap_key": "key1", 00:18:35.569 "dhchap_ctrlr_key": "key3", 00:18:35.569 "method": "bdev_nvme_set_keys", 00:18:35.569 "req_id": 1 00:18:35.569 } 00:18:35.569 Got JSON-RPC error response 00:18:35.569 response: 00:18:35.569 { 00:18:35.569 "code": -13, 00:18:35.569 "message": "Permission denied" 00:18:35.569 } 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:35.569 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.944 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.511 nvme0n1 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
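
The negative test in progress here probes SPDK's in-band re-key path: nvmf_subsystem_set_keys stages a new key pair for the host entry on the target, and bdev_nvme_set_keys then re-authenticates the live host controller, which succeeds only when the host presents the pair the target now holds. Condensed from the RPC calls traced in this run (a reference sketch, not captured output; the target listens on the default RPC socket, the host app on /var/tmp/host.sock):

  # target side: stage key2/key3 for this host NQN on cnode0
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # host side: re-authenticate the live controller nvme0 with the matching pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # a pair the target does not hold (here key2/key0) fails the DH-HMAC-CHAP
  # handshake and the RPC returns: {"code": -13, "message": "Permission denied"}
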
00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:37.770 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:38.029 request: 00:18:38.029 { 00:18:38.029 "name": "nvme0", 00:18:38.029 "dhchap_key": "key2", 00:18:38.029 "dhchap_ctrlr_key": "key0", 00:18:38.029 "method": "bdev_nvme_set_keys", 00:18:38.029 "req_id": 1 00:18:38.029 } 00:18:38.029 Got JSON-RPC error response 00:18:38.029 response: 00:18:38.029 { 00:18:38.029 "code": -13, 00:18:38.029 "message": "Permission denied" 00:18:38.029 } 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:38.029 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.288 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:38.288 17:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:39.223 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:39.223 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:39.223 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2558988 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2558988 ']' 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2558988 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:39.482 
17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558988 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558988' 00:18:39.482 killing process with pid 2558988 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2558988 00:18:39.482 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2558988 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.741 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.000 rmmod nvme_tcp 00:18:40.000 rmmod nvme_fabrics 00:18:40.000 rmmod nvme_keyring 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2580918 ']' 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2580918 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2580918 ']' 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2580918 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.000 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2580918 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2580918' 00:18:40.000 killing process with pid 2580918 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2580918 00:18:40.000 17:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2580918 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.000 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:40.259 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.259 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.259 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Wv6 /tmp/spdk.key-sha256.rQ6 /tmp/spdk.key-sha384.FUw /tmp/spdk.key-sha512.yZz /tmp/spdk.key-sha512.hTm /tmp/spdk.key-sha384.33M /tmp/spdk.key-sha256.APp '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:42.165 00:18:42.165 real 2m32.673s 00:18:42.165 user 5m51.974s 00:18:42.165 sys 0m24.131s 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.165 ************************************ 00:18:42.165 END TEST nvmf_auth_target 00:18:42.165 ************************************ 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.165 ************************************ 00:18:42.165 START TEST nvmf_bdevio_no_huge 00:18:42.165 ************************************ 00:18:42.165 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:42.424 * Looking for test storage... 
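
Before the next test starts, the auth-target teardown that just ran reduces to a short shell sequence; a condensed sketch assembled from the trace above, with the PIDs, NIC and key-file names from this run (the kill/wait pairs work because the test shell launched both SPDK apps):

  # killprocess helper: signal each SPDK app and reap it
  kill 2558988 && wait 2558988
  kill 2580918 && wait 2580918
  # unload the kernel fabrics stack (cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-tcp
  # drop the SPDK_NVMF firewall rules and clear the test interface address
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  # remove the DHCHAP key files generated at setup
  rm -f /tmp/spdk.key-null.Wv6 /tmp/spdk.key-sha256.rQ6 /tmp/spdk.key-sha384.FUw \
        /tmp/spdk.key-sha512.yZz /tmp/spdk.key-sha512.hTm /tmp/spdk.key-sha384.33M \
        /tmp/spdk.key-sha256.APp
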
00:18:42.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:42.424 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:42.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:42.425 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.993 
17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:48.993 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:48.993 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.993 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:48.994 Found net devices under 0000:af:00.0: cvl_0_0 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:48.994 Found net devices under 0000:af:00.1: cvl_0_1 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:48.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:18:48.994 00:18:48.994 --- 10.0.0.2 ping statistics --- 00:18:48.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.994 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:18:48.994 00:18:48.994 --- 10.0.0.1 ping statistics --- 00:18:48.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.994 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2587734 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2587734 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2587734 ']' 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.994 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.994 [2024-12-09 17:29:17.500307] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:18:48.994 [2024-12-09 17:29:17.500353] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:48.994 [2024-12-09 17:29:17.583296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.994 [2024-12-09 17:29:17.629637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.994 [2024-12-09 17:29:17.629672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.994 [2024-12-09 17:29:17.629679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.994 [2024-12-09 17:29:17.629685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.994 [2024-12-09 17:29:17.629690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
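The nvmfappstart trace above reduces to two steps: launch nvmf_tgt inside the target network namespace with the flags shown, then poll its RPC socket before issuing any rpc.py calls. A minimal sketch of that sequence, assuming the default /var/tmp/spdk.sock socket path and a 15-second timeout (the real waitforlisten also keeps checking that the PID stays alive):

#!/usr/bin/env bash
# Start the target in the namespace, with the same flags the log shows.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Poll for the UNIX-domain RPC socket (a simplified waitforlisten).
for _ in $(seq 1 15); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 1
done
kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }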
00:18:48.994 [2024-12-09 17:29:17.630885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:48.994 [2024-12-09 17:29:17.630993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:48.994 [2024-12-09 17:29:17.631099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.994 [2024-12-09 17:29:17.631100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.253 [2024-12-09 17:29:18.399469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.253 Malloc0 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.253 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.512 [2024-12-09 17:29:18.443732] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.512 { 00:18:49.512 "params": { 00:18:49.512 "name": "Nvme$subsystem", 00:18:49.512 "trtype": "$TEST_TRANSPORT", 00:18:49.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.512 "adrfam": "ipv4", 00:18:49.512 "trsvcid": "$NVMF_PORT", 00:18:49.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.512 "hdgst": ${hdgst:-false}, 00:18:49.512 "ddgst": ${ddgst:-false} 00:18:49.512 }, 00:18:49.512 "method": "bdev_nvme_attach_controller" 00:18:49.512 } 00:18:49.512 EOF 00:18:49.512 )") 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:49.512 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:49.512 "params": { 00:18:49.512 "name": "Nvme1", 00:18:49.512 "trtype": "tcp", 00:18:49.512 "traddr": "10.0.0.2", 00:18:49.512 "adrfam": "ipv4", 00:18:49.512 "trsvcid": "4420", 00:18:49.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.512 "hdgst": false, 00:18:49.512 "ddgst": false 00:18:49.512 }, 00:18:49.512 "method": "bdev_nvme_attach_controller" 00:18:49.512 }' 00:18:49.512 [2024-12-09 17:29:18.493477] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
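The --json /dev/fd/62 argument in the bdevio invocation above comes from process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza whose fully resolved form appears in the trace. A file-based equivalent, with parameter values copied from that printf output (the surrounding "subsystems"/"bdev" wrapper is an assumption; the log only shows the inner stanza):

# Write the generated target config to a file instead of /dev/fd/62.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
# Same bdevio flags as the traced command.
./test/bdev/bdevio/bdevio --json /tmp/nvme1.json --no-huge -s 1024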
00:18:49.512 [2024-12-09 17:29:18.493520] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2587980 ] 00:18:49.512 [2024-12-09 17:29:18.572028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.512 [2024-12-09 17:29:18.619856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.512 [2024-12-09 17:29:18.619960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.512 [2024-12-09 17:29:18.619960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.771 I/O targets: 00:18:49.771 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:49.771 00:18:49.771 00:18:49.771 CUnit - A unit testing framework for C - Version 2.1-3 00:18:49.771 http://cunit.sourceforge.net/ 00:18:49.771 00:18:49.771 00:18:49.771 Suite: bdevio tests on: Nvme1n1 00:18:49.771 Test: blockdev write read block ...passed 00:18:49.771 Test: blockdev write zeroes read block ...passed 00:18:49.771 Test: blockdev write zeroes read no split ...passed 00:18:50.030 Test: blockdev write zeroes read split ...passed 00:18:50.030 Test: blockdev write zeroes read split partial ...passed 00:18:50.030 Test: blockdev reset ...[2024-12-09 17:29:19.028932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:50.030 [2024-12-09 17:29:19.028993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6cef0 (9): Bad file descriptor 00:18:50.030 [2024-12-09 17:29:19.121950] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:50.030 passed 00:18:50.030 Test: blockdev write read 8 blocks ...passed 00:18:50.030 Test: blockdev write read size > 128k ...passed 00:18:50.030 Test: blockdev write read invalid size ...passed 00:18:50.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:50.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:50.030 Test: blockdev write read max offset ...passed 00:18:50.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:50.289 Test: blockdev writev readv 8 blocks ...passed 00:18:50.289 Test: blockdev writev readv 30 x 1block ...passed 00:18:50.289 Test: blockdev writev readv block ...passed 00:18:50.289 Test: blockdev writev readv size > 128k ...passed 00:18:50.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:50.289 Test: blockdev comparev and writev ...[2024-12-09 17:29:19.331810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.331839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.331852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.331860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.332083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.332092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.332104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.332111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.332344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.332355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.332366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.332373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.332604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.332614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.332625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.289 [2024-12-09 17:29:19.332633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.289 passed 00:18:50.289 Test: blockdev nvme passthru rw ...passed 00:18:50.289 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:29:19.414469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.289 [2024-12-09 17:29:19.414487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.414591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.289 [2024-12-09 17:29:19.414601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.414698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.289 [2024-12-09 17:29:19.414711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.289 [2024-12-09 17:29:19.414807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.289 [2024-12-09 17:29:19.414816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.289 passed 00:18:50.289 Test: blockdev nvme admin passthru ...passed 00:18:50.548 Test: blockdev copy ...passed 00:18:50.548 00:18:50.548 Run Summary: Type Total Ran Passed Failed Inactive 00:18:50.548 suites 1 1 n/a 0 0 00:18:50.548 tests 23 23 23 0 0 00:18:50.548 asserts 152 152 152 0 n/a 00:18:50.548 00:18:50.548 Elapsed time = 1.306 seconds 00:18:50.548 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.548 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.548 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.807 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.807 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:50.807 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:50.807 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.807 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.808 rmmod nvme_tcp 00:18:50.808 rmmod nvme_fabrics 00:18:50.808 rmmod nvme_keyring 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2587734 ']' 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2587734 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2587734 ']' 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2587734 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2587734 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2587734' 00:18:50.808 killing process with pid 2587734 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2587734 00:18:50.808 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2587734 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.067 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:53.604 00:18:53.604 real 0m10.901s 00:18:53.604 user 0m13.931s 00:18:53.604 sys 0m5.383s 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.604 ************************************ 00:18:53.604 END TEST nvmf_bdevio_no_huge 00:18:53.604 ************************************ 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.604 ************************************ 00:18:53.604 START TEST nvmf_tls 00:18:53.604 ************************************ 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:53.604 * Looking for test storage... 00:18:53.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:53.604 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:53.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.605 --rc genhtml_branch_coverage=1 00:18:53.605 --rc genhtml_function_coverage=1 00:18:53.605 --rc genhtml_legend=1 00:18:53.605 --rc geninfo_all_blocks=1 00:18:53.605 --rc geninfo_unexecuted_blocks=1 00:18:53.605 00:18:53.605 ' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:53.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.605 --rc genhtml_branch_coverage=1 00:18:53.605 --rc genhtml_function_coverage=1 00:18:53.605 --rc genhtml_legend=1 00:18:53.605 --rc geninfo_all_blocks=1 00:18:53.605 --rc geninfo_unexecuted_blocks=1 00:18:53.605 00:18:53.605 ' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:53.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.605 --rc genhtml_branch_coverage=1 00:18:53.605 --rc genhtml_function_coverage=1 00:18:53.605 --rc genhtml_legend=1 00:18:53.605 --rc geninfo_all_blocks=1 00:18:53.605 --rc geninfo_unexecuted_blocks=1 00:18:53.605 00:18:53.605 ' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:53.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.605 --rc genhtml_branch_coverage=1 00:18:53.605 --rc genhtml_function_coverage=1 00:18:53.605 --rc genhtml_legend=1 00:18:53.605 --rc geninfo_all_blocks=1 00:18:53.605 --rc geninfo_unexecuted_blocks=1 00:18:53.605 00:18:53.605 ' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
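The scripts/common.sh trace above is the "lt 1.15 2" check deciding whether the installed lcov predates 2.x, and hence whether the --rc branch/function coverage options must be passed. A condensed sketch of that field-wise comparison, simplified from cmp_versions (it skips the non-numeric handling done by the traced "decimal" helper):

lt() {
    # Split both versions on '.', '-', or ':' into arrays (as the trace shows).
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v
    # Walk the longer of the two arrays; missing fields count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1  # equal versions: not strictly less-than
}

lt 1.15 2 && echo "lcov < 2.0: pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"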
00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:53.605 17:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
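gather_supported_nvmf_pci_devs, traced above for the TLS run just as for the bdevio run, matches NIC PCI IDs against known Intel (0x8086) and Mellanox (0x15b3) device tables, then resolves each matching function to its kernel netdev through sysfs. A standalone sketch of that sysfs lookup for the E810 ID the log matches (0x159b):

# Find kernel net devices backed by Intel E810 (0x8086:0x159b) functions.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    # Same glob the trace uses: /sys/bus/pci/devices/$pci/net/*
    pci_net_devs=("$pci"/net/*)
    [[ -e ${pci_net_devs[0]} ]] || continue
    # Strip the path prefix, as the trace does with ${pci_net_devs[@]##*/}.
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]##*/}"
done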
00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:00.174 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:00.174 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:00.174 Found net devices under 0000:af:00.0: cvl_0_0 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:00.174 Found net devices under 0000:af:00.1: cvl_0_1 00:19:00.174 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:00.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:19:00.175 00:19:00.175 --- 10.0.0.2 ping statistics --- 00:19:00.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.175 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:19:00.175
00:19:00.175 --- 10.0.0.1 ping statistics ---
00:19:00.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:00.175 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2591700
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2591700
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2591700 ']'
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:00.175 [2024-12-09 17:29:28.507654] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
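The namespace plumbing and target launch traced above reduce to the sketch below. The interface names (cvl_0_0/cvl_0_1), addresses, and port are the values discovered and used on this particular host, and the relative nvmf_tgt path and trailing ampersand are illustrative, not verbatim from the run:

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the link (the initiator side stays in the root namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in through the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the target inside the namespace; --wait-for-rpc defers init until RPC configuration
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &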
00:19:00.175 [2024-12-09 17:29:28.507697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.175 [2024-12-09 17:29:28.587728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.175 [2024-12-09 17:29:28.627128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.175 [2024-12-09 17:29:28.627162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.175 [2024-12-09 17:29:28.627172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.175 [2024-12-09 17:29:28.627182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.175 [2024-12-09 17:29:28.627189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.175 [2024-12-09 17:29:28.627748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:00.175 true 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.175 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:00.175 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:00.175 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:00.175 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:00.175 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.175 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:00.434 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:00.434 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:00.434 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:00.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:00.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:00.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:00.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:00.950 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:00.950 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:00.950 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:01.209 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.209 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:01.209 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:01.209 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:01.467 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:01.467 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.467 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3BG4kTd7WS 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.aP1mxlhMMi 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3BG4kTd7WS 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.aP1mxlhMMi 00:19:01.726 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:01.984 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:02.243 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3BG4kTd7WS 00:19:02.243 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3BG4kTd7WS 00:19:02.243 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.501 [2024-12-09 17:29:31.470199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.501 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.501 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.760 [2024-12-09 17:29:31.847179] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.760 [2024-12-09 17:29:31.847410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.760 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.018 malloc0 00:19:03.018 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.276 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3BG4kTd7WS 00:19:03.276 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.534 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3BG4kTd7WS 00:19:15.736 Initializing NVMe Controllers 00:19:15.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:15.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:15.736 Initialization complete. Launching workers. 00:19:15.736 ======================================================== 00:19:15.736 Latency(us) 00:19:15.736 Device Information : IOPS MiB/s Average min max 00:19:15.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16921.25 66.10 3782.31 849.34 4387.60 00:19:15.736 ======================================================== 00:19:15.736 Total : 16921.25 66.10 3782.31 849.34 4387.60 00:19:15.736 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BG4kTd7WS 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3BG4kTd7WS 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2594020 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2594020 /var/tmp/bdevperf.sock 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2594020 ']' 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:15.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:15.737 [2024-12-09 17:29:42.784256] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:15.737 [2024-12-09 17:29:42.784309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594020 ]
00:19:15.737 [2024-12-09 17:29:42.856481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:15.737 [2024-12-09 17:29:42.897212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:15.737 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3BG4kTd7WS
00:19:15.737 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:15.737 [2024-12-09 17:29:43.341474] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:15.737 TLSTESTn1
00:19:15.737 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:15.737 Running I/O for 10 seconds...
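For reference while the 10-second run below ticks along, here is the RPC sequence that produced this working TLS connection, condensed from the traces above. rpc.py is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and the key file names are the mktemp values from this particular run:

    # target side: TLS 1.3 on the ssl socket implementation, then the subsystem
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.3BG4kTd7WS
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator (bdevperf) side: register the same key, then attach over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3BG4kTd7WS
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0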
00:19:16.670 5460.00 IOPS, 21.33 MiB/s
[2024-12-09T16:29:46.783Z] 5549.00 IOPS, 21.68 MiB/s
[2024-12-09T16:29:47.719Z] 5614.33 IOPS, 21.93 MiB/s
[2024-12-09T16:29:48.653Z] 5608.50 IOPS, 21.91 MiB/s
[2024-12-09T16:29:49.588Z] 5641.00 IOPS, 22.04 MiB/s
[2024-12-09T16:29:50.962Z] 5592.17 IOPS, 21.84 MiB/s
[2024-12-09T16:29:51.897Z] 5580.14 IOPS, 21.80 MiB/s
[2024-12-09T16:29:52.831Z] 5599.12 IOPS, 21.87 MiB/s
[2024-12-09T16:29:53.766Z] 5595.67 IOPS, 21.86 MiB/s
[2024-12-09T16:29:53.766Z] 5610.70 IOPS, 21.92 MiB/s
00:19:24.587 Latency(us)
00:19:24.587 [2024-12-09T16:29:53.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:24.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:24.587 Verification LBA range: start 0x0 length 0x2000
00:19:24.587 TLSTESTn1 : 10.02 5613.92 21.93 0.00 0.00 22764.86 6491.18 23468.13
00:19:24.587 [2024-12-09T16:29:53.766Z] ===================================================================================================================
00:19:24.587 [2024-12-09T16:29:53.766Z] Total : 5613.92 21.93 0.00 0.00 22764.86 6491.18 23468.13
00:19:24.587
00:19:24.587 {
00:19:24.587 "results": [
00:19:24.587 {
00:19:24.587 "job": "TLSTESTn1",
00:19:24.587 "core_mask": "0x4",
00:19:24.587 "workload": "verify",
00:19:24.587 "status": "finished",
00:19:24.587 "verify_range": {
00:19:24.587 "start": 0,
00:19:24.587 "length": 8192
00:19:24.587 },
00:19:24.587 "queue_depth": 128,
00:19:24.587 "io_size": 4096,
00:19:24.587 "runtime": 10.016884,
00:19:24.587 "iops": 5613.921455015352,
00:19:24.587 "mibps": 21.92938068365372,
00:19:24.587 "io_failed": 0,
00:19:24.587 "io_timeout": 0,
00:19:24.587 "avg_latency_us": 22764.86426913391,
00:19:24.587 "min_latency_us": 6491.184761904762,
00:19:24.587 "max_latency_us": 23468.129523809523
00:19:24.587 }
00:19:24.587 ],
00:19:24.587 "core_count": 1
00:19:24.587 }
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2594020
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2594020 ']'
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2594020
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594020
00:19:24.587 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:24.588 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:24.588 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594020'
killing process with pid 2594020
00:19:24.588 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2594020
00:19:24.588 Received shutdown signal, test time was about 10.000000 seconds
00:19:24.588
00:19:24.588 Latency(us)
00:19:24.588 [2024-12-09T16:29:53.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:24.588 [2024-12-09T16:29:53.767Z]
=================================================================================================================== 00:19:24.588 [2024-12-09T16:29:53.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.588 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2594020 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP1mxlhMMi 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP1mxlhMMi 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP1mxlhMMi 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aP1mxlhMMi 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2595833 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2595833 /var/tmp/bdevperf.sock 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2595833 ']' 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.847 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.847 [2024-12-09 17:29:53.855363] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:24.847 [2024-12-09 17:29:53.855413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595833 ] 00:19:24.847 [2024-12-09 17:29:53.929260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.847 [2024-12-09 17:29:53.967419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.105 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.105 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.105 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aP1mxlhMMi 00:19:25.105 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.363 [2024-12-09 17:29:54.431298] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.363 [2024-12-09 17:29:54.435908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.363 [2024-12-09 17:29:54.436542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718700 (107): Transport endpoint is not connected 00:19:25.363 [2024-12-09 17:29:54.437535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718700 (9): Bad file descriptor 00:19:25.363 [2024-12-09 17:29:54.438535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:25.363 [2024-12-09 17:29:54.438548] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.363 [2024-12-09 17:29:54.438555] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:25.363 [2024-12-09 17:29:54.438565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
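What just happened: host1 is registered on the target with the first key (/tmp/tmp.3BG4kTd7WS), but this initiator presented the second key (/tmp/tmp.aP1mxlhMMi), so the TLS handshake cannot complete and the attach fails with the I/O error dumped below. The NOT wrapper inverts the exit status, so the test passes precisely because the attach fails. A simplified sketch of that idiom (the real NOT helper lives in autotest_common.sh, as the trace above shows):

    # expected-failure check: attaching with a mismatched PSK must NOT succeed
    if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP1mxlhMMi; then
        echo "attach with mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi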
00:19:25.363 request: 00:19:25.363 { 00:19:25.363 "name": "TLSTEST", 00:19:25.363 "trtype": "tcp", 00:19:25.363 "traddr": "10.0.0.2", 00:19:25.363 "adrfam": "ipv4", 00:19:25.363 "trsvcid": "4420", 00:19:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.363 "prchk_reftag": false, 00:19:25.363 "prchk_guard": false, 00:19:25.363 "hdgst": false, 00:19:25.363 "ddgst": false, 00:19:25.363 "psk": "key0", 00:19:25.363 "allow_unrecognized_csi": false, 00:19:25.363 "method": "bdev_nvme_attach_controller", 00:19:25.363 "req_id": 1 00:19:25.363 } 00:19:25.363 Got JSON-RPC error response 00:19:25.363 response: 00:19:25.363 { 00:19:25.363 "code": -5, 00:19:25.363 "message": "Input/output error" 00:19:25.363 } 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2595833 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2595833 ']' 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2595833 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595833 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595833' 00:19:25.363 killing process with pid 2595833 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2595833 00:19:25.363 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.363 00:19:25.363 Latency(us) 00:19:25.363 [2024-12-09T16:29:54.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.363 [2024-12-09T16:29:54.542Z] =================================================================================================================== 00:19:25.363 [2024-12-09T16:29:54.542Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.363 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2595833 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3BG4kTd7WS 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.3BG4kTd7WS 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3BG4kTd7WS 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3BG4kTd7WS 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2595857 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2595857 /var/tmp/bdevperf.sock 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2595857 ']' 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.622 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.622 [2024-12-09 17:29:54.716204] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:19:25.622 [2024-12-09 17:29:54.716255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595857 ] 00:19:25.622 [2024-12-09 17:29:54.790667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.881 [2024-12-09 17:29:54.831561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.881 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.881 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.881 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3BG4kTd7WS 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:26.138 [2024-12-09 17:29:55.258678] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.138 [2024-12-09 17:29:55.265874] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.138 [2024-12-09 17:29:55.265896] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.138 [2024-12-09 17:29:55.265919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.138 [2024-12-09 17:29:55.265949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193f700 (107): Transport endpoint is not connected 00:19:26.138 [2024-12-09 17:29:55.266931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193f700 (9): Bad file descriptor 00:19:26.138 [2024-12-09 17:29:55.267932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:26.138 [2024-12-09 17:29:55.267942] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.138 [2024-12-09 17:29:55.267949] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:26.138 [2024-12-09 17:29:55.267959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
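This second negative case differs subtly from the first: the key file is the right one, but the connection is made as host2, which was never added to cnode1. The tcp.c/posix.c errors above show the lookup that fails: the client's TLS PSK identity has the form "NVMe0R01 <hostnqn> <subnqn>", and the target finds no PSK registered for that pair, so the handshake is rejected before any key material is even compared. Roughly:

    # identity the target tried to resolve, as printed in the errors above
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"   # no nvmf_subsystem_add_host entry matches -> no PSK found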
00:19:26.138 request: 00:19:26.138 { 00:19:26.138 "name": "TLSTEST", 00:19:26.138 "trtype": "tcp", 00:19:26.138 "traddr": "10.0.0.2", 00:19:26.138 "adrfam": "ipv4", 00:19:26.138 "trsvcid": "4420", 00:19:26.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.138 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:26.138 "prchk_reftag": false, 00:19:26.138 "prchk_guard": false, 00:19:26.138 "hdgst": false, 00:19:26.138 "ddgst": false, 00:19:26.138 "psk": "key0", 00:19:26.138 "allow_unrecognized_csi": false, 00:19:26.138 "method": "bdev_nvme_attach_controller", 00:19:26.138 "req_id": 1 00:19:26.138 } 00:19:26.138 Got JSON-RPC error response 00:19:26.138 response: 00:19:26.138 { 00:19:26.138 "code": -5, 00:19:26.138 "message": "Input/output error" 00:19:26.138 } 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2595857 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2595857 ']' 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2595857 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.138 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595857 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595857' 00:19:26.396 killing process with pid 2595857 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2595857 00:19:26.396 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.396 00:19:26.396 Latency(us) 00:19:26.396 [2024-12-09T16:29:55.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.396 [2024-12-09T16:29:55.575Z] =================================================================================================================== 00:19:26.396 [2024-12-09T16:29:55.575Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2595857 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BG4kTd7WS 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.3BG4kTd7WS 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BG4kTd7WS 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3BG4kTd7WS 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2596082 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2596082 /var/tmp/bdevperf.sock 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2596082 ']' 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.396 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.396 [2024-12-09 17:29:55.538875] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:19:26.396 [2024-12-09 17:29:55.538922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596082 ] 00:19:26.655 [2024-12-09 17:29:55.600664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.655 [2024-12-09 17:29:55.637010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.655 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.655 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.655 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3BG4kTd7WS 00:19:26.913 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.913 [2024-12-09 17:29:56.088357] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.171 [2024-12-09 17:29:56.097966] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.171 [2024-12-09 17:29:56.097988] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.171 [2024-12-09 17:29:56.098010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.171 [2024-12-09 17:29:56.098712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0700 (107): Transport endpoint is not connected 00:19:27.172 [2024-12-09 17:29:56.099705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0700 (9): Bad file descriptor 00:19:27.172 [2024-12-09 17:29:56.100707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:27.172 [2024-12-09 17:29:56.100718] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.172 [2024-12-09 17:29:56.100725] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:27.172 [2024-12-09 17:29:56.100736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
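Third of the expected-failure attaches: this time the host and key are valid, but the subsystem NQN (cnode2) was never created, so the PSK-identity lookup fails for the same reason as the host2 case. Taken together, the cases traced so far cover the ways a TLS attach can be mis-keyed, and a fourth variant with no key at all follows below (tls.sh@156):

    # cnode1 + host1 + wrong key file -> handshake fails (PSK mismatch)            (tls.sh@147)
    # cnode1 + host2 + correct key    -> no PSK for identity (host not added)      (tls.sh@150)
    # cnode2 + host1 + correct key    -> no PSK for identity (no such subsystem)   (tls.sh@153)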
00:19:27.172 request: 00:19:27.172 { 00:19:27.172 "name": "TLSTEST", 00:19:27.172 "trtype": "tcp", 00:19:27.172 "traddr": "10.0.0.2", 00:19:27.172 "adrfam": "ipv4", 00:19:27.172 "trsvcid": "4420", 00:19:27.172 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.172 "prchk_reftag": false, 00:19:27.172 "prchk_guard": false, 00:19:27.172 "hdgst": false, 00:19:27.172 "ddgst": false, 00:19:27.172 "psk": "key0", 00:19:27.172 "allow_unrecognized_csi": false, 00:19:27.172 "method": "bdev_nvme_attach_controller", 00:19:27.172 "req_id": 1 00:19:27.172 } 00:19:27.172 Got JSON-RPC error response 00:19:27.172 response: 00:19:27.172 { 00:19:27.172 "code": -5, 00:19:27.172 "message": "Input/output error" 00:19:27.172 } 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2596082 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2596082 ']' 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2596082 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596082 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596082' 00:19:27.172 killing process with pid 2596082 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2596082 00:19:27.172 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.172 00:19:27.172 Latency(us) 00:19:27.172 [2024-12-09T16:29:56.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.172 [2024-12-09T16:29:56.351Z] =================================================================================================================== 00:19:27.172 [2024-12-09T16:29:56.351Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2596082 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.172 
17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2596241 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2596241 /var/tmp/bdevperf.sock 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2596241 ']' 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.172 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.431 [2024-12-09 17:29:56.384085] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:19:27.431 [2024-12-09 17:29:56.384136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596241 ] 00:19:27.431 [2024-12-09 17:29:56.459044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.431 [2024-12-09 17:29:56.498029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.431 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.431 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.431 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:27.689 [2024-12-09 17:29:56.760389] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:27.689 [2024-12-09 17:29:56.760420] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:27.689 request: 00:19:27.689 { 00:19:27.689 "name": "key0", 00:19:27.689 "path": "", 00:19:27.689 "method": "keyring_file_add_key", 00:19:27.689 "req_id": 1 00:19:27.689 } 00:19:27.689 Got JSON-RPC error response 00:19:27.689 response: 00:19:27.689 { 00:19:27.689 "code": -1, 00:19:27.689 "message": "Operation not permitted" 00:19:27.689 } 00:19:27.689 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.948 [2024-12-09 17:29:56.944947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.948 [2024-12-09 17:29:56.944972] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:27.948 request: 00:19:27.948 { 00:19:27.948 "name": "TLSTEST", 00:19:27.948 "trtype": "tcp", 00:19:27.948 "traddr": "10.0.0.2", 00:19:27.948 "adrfam": "ipv4", 00:19:27.948 "trsvcid": "4420", 00:19:27.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.948 "prchk_reftag": false, 00:19:27.948 "prchk_guard": false, 00:19:27.948 "hdgst": false, 00:19:27.948 "ddgst": false, 00:19:27.948 "psk": "key0", 00:19:27.948 "allow_unrecognized_csi": false, 00:19:27.948 "method": "bdev_nvme_attach_controller", 00:19:27.948 "req_id": 1 00:19:27.948 } 00:19:27.948 Got JSON-RPC error response 00:19:27.948 response: 00:19:27.948 { 00:19:27.948 "code": -126, 00:19:27.948 "message": "Required key not available" 00:19:27.948 } 00:19:27.948 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2596241 00:19:27.948 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2596241 ']' 00:19:27.948 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2596241 00:19:27.948 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.948 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.948 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2596241 00:19:27.948 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.948 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.948 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596241' 00:19:27.948 killing process with pid 2596241 00:19:27.948 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2596241 00:19:27.948 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.948 00:19:27.948 Latency(us) 00:19:27.948 [2024-12-09T16:29:57.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.948 [2024-12-09T16:29:57.127Z] =================================================================================================================== 00:19:27.948 [2024-12-09T16:29:57.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.948 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2596241 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2591700 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2591700 ']' 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2591700 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2591700 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2591700' 00:19:28.207 killing process with pid 2591700 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2591700 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2591700 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:28.207 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:28.207 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:28.465 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.465 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zVkKwP74ju 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zVkKwP74ju 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2596345 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2596345 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2596345 ']' 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.466 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.466 [2024-12-09 17:29:57.473891] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:28.466 [2024-12-09 17:29:57.473942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.466 [2024-12-09 17:29:57.551206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.466 [2024-12-09 17:29:57.588445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.466 [2024-12-09 17:29:57.588490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
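The key_long value generated a few lines up is worth unpacking: format_interchange_psk feeds the hex string and the digest number through the python one-liner visible in the trace, and the resulting NVMeTLSkey-1:02:...: string is what gets written to /tmp/tmp.zVkKwP74ju and chmod'd to 0600. A minimal sketch of that computation, assuming the CRC32 of the ASCII key bytes is appended little-endian before base64-encoding (the trace shows the inputs and the output, not the python body; the 4 extra bytes in the base64 tail "...wWXNJw==" are consistent with this):

    import base64
    import zlib

    # Sketch of format_interchange_psk as used above; the CRC32 /
    # little-endian detail is an assumption, not confirmed by this log.
    def format_interchange_psk(key: str, digest: int) -> str:
        raw = key.encode()
        crc = zlib.crc32(raw).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(
            digest, base64.b64encode(raw + crc).decode())

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # Should print the key_long value from the trace:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: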
00:19:28.466 [2024-12-09 17:29:57.588497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.466 [2024-12-09 17:29:57.588504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.466 [2024-12-09 17:29:57.588509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.466 [2024-12-09 17:29:57.589050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zVkKwP74ju 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zVkKwP74ju 00:19:28.724 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.983 [2024-12-09 17:29:57.905238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.983 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.983 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.241 [2024-12-09 17:29:58.278179] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.241 [2024-12-09 17:29:58.278393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.241 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.500 malloc0 00:19:29.500 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.500 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:19:29.758 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zVkKwP74ju 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zVkKwP74ju 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2596676 00:19:30.016 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2596676 /var/tmp/bdevperf.sock 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2596676 ']' 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.017 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.017 [2024-12-09 17:29:59.072186] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:19:30.017 [2024-12-09 17:29:59.072242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596676 ] 00:19:30.017 [2024-12-09 17:29:59.146938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.017 [2024-12-09 17:29:59.187149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.275 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.275 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.275 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:19:30.533 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.533 [2024-12-09 17:29:59.643483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.792 TLSTESTn1 00:19:30.792 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.792 Running I/O for 10 seconds... 00:19:32.810 5319.00 IOPS, 20.78 MiB/s [2024-12-09T16:30:02.922Z] 5494.50 IOPS, 21.46 MiB/s [2024-12-09T16:30:03.857Z] 5493.00 IOPS, 21.46 MiB/s [2024-12-09T16:30:05.233Z] 5509.25 IOPS, 21.52 MiB/s [2024-12-09T16:30:06.167Z] 5526.80 IOPS, 21.59 MiB/s [2024-12-09T16:30:07.103Z] 5544.33 IOPS, 21.66 MiB/s [2024-12-09T16:30:08.037Z] 5558.86 IOPS, 21.71 MiB/s [2024-12-09T16:30:08.970Z] 5550.50 IOPS, 21.68 MiB/s [2024-12-09T16:30:09.905Z] 5481.22 IOPS, 21.41 MiB/s [2024-12-09T16:30:09.905Z] 5450.40 IOPS, 21.29 MiB/s 00:19:40.726 Latency(us) 00:19:40.726 [2024-12-09T16:30:09.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.726 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.726 Verification LBA range: start 0x0 length 0x2000 00:19:40.726 TLSTESTn1 : 10.02 5453.51 21.30 0.00 0.00 23435.64 4774.77 27088.21 00:19:40.726 [2024-12-09T16:30:09.905Z] =================================================================================================================== 00:19:40.726 [2024-12-09T16:30:09.905Z] Total : 5453.51 21.30 0.00 0.00 23435.64 4774.77 27088.21 00:19:40.726 { 00:19:40.726 "results": [ 00:19:40.726 { 00:19:40.726 "job": "TLSTESTn1", 00:19:40.726 "core_mask": "0x4", 00:19:40.726 "workload": "verify", 00:19:40.726 "status": "finished", 00:19:40.726 "verify_range": { 00:19:40.726 "start": 0, 00:19:40.726 "length": 8192 00:19:40.726 }, 00:19:40.726 "queue_depth": 128, 00:19:40.726 "io_size": 4096, 00:19:40.726 "runtime": 10.017773, 00:19:40.726 "iops": 5453.507481153745, 00:19:40.726 "mibps": 21.302763598256817, 00:19:40.726 "io_failed": 0, 00:19:40.726 "io_timeout": 0, 00:19:40.726 "avg_latency_us": 23435.63748662915, 00:19:40.726 "min_latency_us": 4774.765714285714, 00:19:40.726 "max_latency_us": 27088.213333333333 00:19:40.726 } 00:19:40.726 ], 00:19:40.726 
"core_count": 1 00:19:40.726 } 00:19:40.726 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.726 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2596676 00:19:40.726 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2596676 ']' 00:19:40.726 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2596676 00:19:40.726 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596676 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596676' 00:19:40.984 killing process with pid 2596676 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2596676 00:19:40.984 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.984 00:19:40.984 Latency(us) 00:19:40.984 [2024-12-09T16:30:10.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.984 [2024-12-09T16:30:10.163Z] =================================================================================================================== 00:19:40.984 [2024-12-09T16:30:10.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.984 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2596676 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zVkKwP74ju 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zVkKwP74ju 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zVkKwP74ju 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zVkKwP74ju 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zVkKwP74ju 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2598925 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2598925 /var/tmp/bdevperf.sock 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2598925 ']' 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.984 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.242 [2024-12-09 17:30:10.166101] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:19:41.242 [2024-12-09 17:30:10.166151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598925 ] 00:19:41.242 [2024-12-09 17:30:10.241932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.242 [2024-12-09 17:30:10.278442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.242 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.242 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.242 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:19:41.500 [2024-12-09 17:30:10.557304] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zVkKwP74ju': 0100666 00:19:41.500 [2024-12-09 17:30:10.557331] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:41.500 request: 00:19:41.500 { 00:19:41.500 "name": "key0", 00:19:41.500 "path": "/tmp/tmp.zVkKwP74ju", 00:19:41.500 "method": "keyring_file_add_key", 00:19:41.500 "req_id": 1 00:19:41.500 } 00:19:41.500 Got JSON-RPC error response 00:19:41.500 response: 00:19:41.500 { 00:19:41.500 "code": -1, 00:19:41.500 "message": "Operation not permitted" 00:19:41.500 } 00:19:41.500 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.758 [2024-12-09 17:30:10.757903] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.758 [2024-12-09 17:30:10.757932] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:41.758 request: 00:19:41.758 { 00:19:41.758 "name": "TLSTEST", 00:19:41.758 "trtype": "tcp", 00:19:41.758 "traddr": "10.0.0.2", 00:19:41.758 "adrfam": "ipv4", 00:19:41.758 "trsvcid": "4420", 00:19:41.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.758 "prchk_reftag": false, 00:19:41.758 "prchk_guard": false, 00:19:41.758 "hdgst": false, 00:19:41.758 "ddgst": false, 00:19:41.758 "psk": "key0", 00:19:41.758 "allow_unrecognized_csi": false, 00:19:41.758 "method": "bdev_nvme_attach_controller", 00:19:41.758 "req_id": 1 00:19:41.758 } 00:19:41.758 Got JSON-RPC error response 00:19:41.758 response: 00:19:41.758 { 00:19:41.758 "code": -126, 00:19:41.758 "message": "Required key not available" 00:19:41.758 } 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2598925 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2598925 ']' 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2598925 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2598925 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2598925' 00:19:41.758 killing process with pid 2598925 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2598925 00:19:41.758 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.758 00:19:41.758 Latency(us) 00:19:41.758 [2024-12-09T16:30:10.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.758 [2024-12-09T16:30:10.937Z] =================================================================================================================== 00:19:41.758 [2024-12-09T16:30:10.937Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.758 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2598925 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2596345 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2596345 ']' 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2596345 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.016 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596345 00:19:42.016 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.016 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.016 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596345' 00:19:42.016 killing process with pid 2596345 00:19:42.016 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2596345 00:19:42.016 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2596345 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2599163 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2599163 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2599163 ']' 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.275 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.275 [2024-12-09 17:30:11.261665] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:42.275 [2024-12-09 17:30:11.261709] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.275 [2024-12-09 17:30:11.338320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.275 [2024-12-09 17:30:11.376799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.275 [2024-12-09 17:30:11.376833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.275 [2024-12-09 17:30:11.376840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.275 [2024-12-09 17:30:11.376846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.275 [2024-12-09 17:30:11.376851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
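The 0666-mode key file is what both the bdevperf instance above and the target below trip over: keyring_file_check_path rejected the empty path at 17:29:56 ("Non-absolute paths are not allowed") and rejects /tmp/tmp.zVkKwP74ju while its mode is 0100666, until the file is chmod'd back to 0600. A sketch of an equivalent check, with the policy inferred from those rejections rather than read out of keyring_file_check_path itself:

    import os
    import stat

    # Equivalent of the two keyring_file_add_key rejections in this
    # excerpt (assumed policy: absolute path required, no group/other
    # permission bits, i.e. chmod 0600).
    def check_key_path(path: str) -> None:
        if not os.path.isabs(path):
            raise ValueError("Non-absolute paths are not allowed: " + path)
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise ValueError(
                "Invalid permissions for key file '%s': 0%o" % (path, mode))

    # check_key_path("")                     -> rejected, like the '' key above
    # check_key_path("/tmp/tmp.zVkKwP74ju")  -> rejected while 0666,
    #                                           accepted again after chmod 0600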
00:19:42.275 [2024-12-09 17:30:11.377403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zVkKwP74ju 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zVkKwP74ju 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zVkKwP74ju 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zVkKwP74ju 00:19:42.533 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:42.533 [2024-12-09 17:30:11.684048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.792 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:42.792 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.050 [2024-12-09 17:30:12.085081] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.050 [2024-12-09 17:30:12.085299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.050 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:43.308 malloc0 00:19:43.308 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:43.566 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:19:43.566 [2024-12-09 
17:30:12.674499] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zVkKwP74ju': 0100666 00:19:43.567 [2024-12-09 17:30:12.674523] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.567 request: 00:19:43.567 { 00:19:43.567 "name": "key0", 00:19:43.567 "path": "/tmp/tmp.zVkKwP74ju", 00:19:43.567 "method": "keyring_file_add_key", 00:19:43.567 "req_id": 1 00:19:43.567 } 00:19:43.567 Got JSON-RPC error response 00:19:43.567 response: 00:19:43.567 { 00:19:43.567 "code": -1, 00:19:43.567 "message": "Operation not permitted" 00:19:43.567 } 00:19:43.567 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.826 [2024-12-09 17:30:12.875035] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:43.826 [2024-12-09 17:30:12.875068] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:43.826 request: 00:19:43.826 { 00:19:43.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.826 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.826 "psk": "key0", 00:19:43.826 "method": "nvmf_subsystem_add_host", 00:19:43.826 "req_id": 1 00:19:43.826 } 00:19:43.826 Got JSON-RPC error response 00:19:43.826 response: 00:19:43.826 { 00:19:43.826 "code": -32603, 00:19:43.826 "message": "Internal error" 00:19:43.826 } 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2599163 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2599163 ']' 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2599163 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2599163 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2599163' 00:19:43.826 killing process with pid 2599163 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2599163 00:19:43.826 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2599163 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zVkKwP74ju 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:44.084 17:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2599502 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2599502 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2599502 ']' 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.084 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.084 [2024-12-09 17:30:13.178564] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:44.084 [2024-12-09 17:30:13.178613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.084 [2024-12-09 17:30:13.258207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.343 [2024-12-09 17:30:13.301052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.343 [2024-12-09 17:30:13.301088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.343 [2024-12-09 17:30:13.301097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.343 [2024-12-09 17:30:13.301103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.343 [2024-12-09 17:30:13.301110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
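From here the suite repeats its happy path: the same setup_nvmf_tgt sequence as at 17:29:57, a fresh bdevperf, and finally the save_config dump below, which captures the resulting target state including the keyring entry. Condensed into the RPC calls that actually appear in this trace (rpc.py path shortened; the socket, NQNs, and key file are the ones shown above), the flow looks roughly like this:

    import subprocess

    RPC = "scripts/rpc.py"  # stands in for the full spdk/scripts/rpc.py path

    def rpc(*args: str) -> None:
        # Target side, default /var/tmp/spdk.sock
        subprocess.run([RPC, *args], check=True)

    def brpc(*args: str) -> None:
        # Initiator side, bdevperf's -r /var/tmp/bdevperf.sock socket
        subprocess.run([RPC, "-s", "/var/tmp/bdevperf.sock", *args], check=True)

    # Target bring-up, as traced above:
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: TLS listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0",
        "-n", "1")
    rpc("keyring_file_add_key", "key0", "/tmp/tmp.zVkKwP74ju")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "key0")

    # Initiator side, once bdevperf is listening:
    brpc("keyring_file_add_key", "key0", "/tmp/tmp.zVkKwP74ju")
    brpc("bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
         "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
         "-n", "nqn.2016-06.io.spdk:cnode1",
         "-q", "nqn.2016-06.io.spdk:host1", "--psk", "key0")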
00:19:44.343 [2024-12-09 17:30:13.301674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zVkKwP74ju 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zVkKwP74ju 00:19:44.343 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.601 [2024-12-09 17:30:13.605787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.601 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.860 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.860 [2024-12-09 17:30:13.994780] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.860 [2024-12-09 17:30:13.994992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.860 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.118 malloc0 00:19:45.118 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.376 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:19:45.635 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2599895 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2599895 /var/tmp/bdevperf.sock 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2599895 ']' 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.894 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.894 [2024-12-09 17:30:14.850906] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:45.894 [2024-12-09 17:30:14.850952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599895 ] 00:19:45.894 [2024-12-09 17:30:14.905706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.894 [2024-12-09 17:30:14.944698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.894 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.894 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.894 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:19:46.151 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.408 [2024-12-09 17:30:15.399601] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.408 TLSTESTn1 00:19:46.408 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:46.666 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:46.666 "subsystems": [ 00:19:46.666 { 00:19:46.666 "subsystem": "keyring", 00:19:46.666 "config": [ 00:19:46.666 { 00:19:46.666 "method": "keyring_file_add_key", 00:19:46.666 "params": { 00:19:46.666 "name": "key0", 00:19:46.666 "path": "/tmp/tmp.zVkKwP74ju" 00:19:46.666 } 00:19:46.666 } 00:19:46.666 ] 00:19:46.666 }, 00:19:46.666 { 00:19:46.666 "subsystem": "iobuf", 00:19:46.666 "config": [ 00:19:46.666 { 00:19:46.666 "method": "iobuf_set_options", 00:19:46.666 "params": { 00:19:46.666 "small_pool_count": 8192, 00:19:46.666 "large_pool_count": 1024, 00:19:46.666 "small_bufsize": 8192, 00:19:46.666 "large_bufsize": 135168, 00:19:46.666 "enable_numa": false 00:19:46.666 } 00:19:46.666 } 00:19:46.666 ] 00:19:46.666 }, 00:19:46.666 { 00:19:46.666 "subsystem": "sock", 00:19:46.666 "config": [ 00:19:46.667 { 00:19:46.667 "method": "sock_set_default_impl", 00:19:46.667 "params": { 00:19:46.667 "impl_name": "posix" 
00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "sock_impl_set_options", 00:19:46.667 "params": { 00:19:46.667 "impl_name": "ssl", 00:19:46.667 "recv_buf_size": 4096, 00:19:46.667 "send_buf_size": 4096, 00:19:46.667 "enable_recv_pipe": true, 00:19:46.667 "enable_quickack": false, 00:19:46.667 "enable_placement_id": 0, 00:19:46.667 "enable_zerocopy_send_server": true, 00:19:46.667 "enable_zerocopy_send_client": false, 00:19:46.667 "zerocopy_threshold": 0, 00:19:46.667 "tls_version": 0, 00:19:46.667 "enable_ktls": false 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "sock_impl_set_options", 00:19:46.667 "params": { 00:19:46.667 "impl_name": "posix", 00:19:46.667 "recv_buf_size": 2097152, 00:19:46.667 "send_buf_size": 2097152, 00:19:46.667 "enable_recv_pipe": true, 00:19:46.667 "enable_quickack": false, 00:19:46.667 "enable_placement_id": 0, 00:19:46.667 "enable_zerocopy_send_server": true, 00:19:46.667 "enable_zerocopy_send_client": false, 00:19:46.667 "zerocopy_threshold": 0, 00:19:46.667 "tls_version": 0, 00:19:46.667 "enable_ktls": false 00:19:46.667 } 00:19:46.667 } 00:19:46.667 ] 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "subsystem": "vmd", 00:19:46.667 "config": [] 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "subsystem": "accel", 00:19:46.667 "config": [ 00:19:46.667 { 00:19:46.667 "method": "accel_set_options", 00:19:46.667 "params": { 00:19:46.667 "small_cache_size": 128, 00:19:46.667 "large_cache_size": 16, 00:19:46.667 "task_count": 2048, 00:19:46.667 "sequence_count": 2048, 00:19:46.667 "buf_count": 2048 00:19:46.667 } 00:19:46.667 } 00:19:46.667 ] 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "subsystem": "bdev", 00:19:46.667 "config": [ 00:19:46.667 { 00:19:46.667 "method": "bdev_set_options", 00:19:46.667 "params": { 00:19:46.667 "bdev_io_pool_size": 65535, 00:19:46.667 "bdev_io_cache_size": 256, 00:19:46.667 "bdev_auto_examine": true, 00:19:46.667 "iobuf_small_cache_size": 128, 00:19:46.667 "iobuf_large_cache_size": 16 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "bdev_raid_set_options", 00:19:46.667 "params": { 00:19:46.667 "process_window_size_kb": 1024, 00:19:46.667 "process_max_bandwidth_mb_sec": 0 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "bdev_iscsi_set_options", 00:19:46.667 "params": { 00:19:46.667 "timeout_sec": 30 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "bdev_nvme_set_options", 00:19:46.667 "params": { 00:19:46.667 "action_on_timeout": "none", 00:19:46.667 "timeout_us": 0, 00:19:46.667 "timeout_admin_us": 0, 00:19:46.667 "keep_alive_timeout_ms": 10000, 00:19:46.667 "arbitration_burst": 0, 00:19:46.667 "low_priority_weight": 0, 00:19:46.667 "medium_priority_weight": 0, 00:19:46.667 "high_priority_weight": 0, 00:19:46.667 "nvme_adminq_poll_period_us": 10000, 00:19:46.667 "nvme_ioq_poll_period_us": 0, 00:19:46.667 "io_queue_requests": 0, 00:19:46.667 "delay_cmd_submit": true, 00:19:46.667 "transport_retry_count": 4, 00:19:46.667 "bdev_retry_count": 3, 00:19:46.667 "transport_ack_timeout": 0, 00:19:46.667 "ctrlr_loss_timeout_sec": 0, 00:19:46.667 "reconnect_delay_sec": 0, 00:19:46.667 "fast_io_fail_timeout_sec": 0, 00:19:46.667 "disable_auto_failback": false, 00:19:46.667 "generate_uuids": false, 00:19:46.667 "transport_tos": 0, 00:19:46.667 "nvme_error_stat": false, 00:19:46.667 "rdma_srq_size": 0, 00:19:46.667 "io_path_stat": false, 00:19:46.667 "allow_accel_sequence": false, 00:19:46.667 "rdma_max_cq_size": 0, 00:19:46.667 
"rdma_cm_event_timeout_ms": 0, 00:19:46.667 "dhchap_digests": [ 00:19:46.667 "sha256", 00:19:46.667 "sha384", 00:19:46.667 "sha512" 00:19:46.667 ], 00:19:46.667 "dhchap_dhgroups": [ 00:19:46.667 "null", 00:19:46.667 "ffdhe2048", 00:19:46.667 "ffdhe3072", 00:19:46.667 "ffdhe4096", 00:19:46.667 "ffdhe6144", 00:19:46.667 "ffdhe8192" 00:19:46.667 ] 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "bdev_nvme_set_hotplug", 00:19:46.667 "params": { 00:19:46.667 "period_us": 100000, 00:19:46.667 "enable": false 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "bdev_malloc_create", 00:19:46.667 "params": { 00:19:46.667 "name": "malloc0", 00:19:46.667 "num_blocks": 8192, 00:19:46.667 "block_size": 4096, 00:19:46.667 "physical_block_size": 4096, 00:19:46.667 "uuid": "3af0b474-5082-46da-87a3-6d46997c5f17", 00:19:46.667 "optimal_io_boundary": 0, 00:19:46.667 "md_size": 0, 00:19:46.667 "dif_type": 0, 00:19:46.667 "dif_is_head_of_md": false, 00:19:46.667 "dif_pi_format": 0 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "bdev_wait_for_examine" 00:19:46.667 } 00:19:46.667 ] 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "subsystem": "nbd", 00:19:46.667 "config": [] 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "subsystem": "scheduler", 00:19:46.667 "config": [ 00:19:46.667 { 00:19:46.667 "method": "framework_set_scheduler", 00:19:46.667 "params": { 00:19:46.667 "name": "static" 00:19:46.667 } 00:19:46.667 } 00:19:46.667 ] 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "subsystem": "nvmf", 00:19:46.667 "config": [ 00:19:46.667 { 00:19:46.667 "method": "nvmf_set_config", 00:19:46.667 "params": { 00:19:46.667 "discovery_filter": "match_any", 00:19:46.667 "admin_cmd_passthru": { 00:19:46.667 "identify_ctrlr": false 00:19:46.667 }, 00:19:46.667 "dhchap_digests": [ 00:19:46.667 "sha256", 00:19:46.667 "sha384", 00:19:46.667 "sha512" 00:19:46.667 ], 00:19:46.667 "dhchap_dhgroups": [ 00:19:46.667 "null", 00:19:46.667 "ffdhe2048", 00:19:46.667 "ffdhe3072", 00:19:46.667 "ffdhe4096", 00:19:46.667 "ffdhe6144", 00:19:46.667 "ffdhe8192" 00:19:46.667 ] 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "nvmf_set_max_subsystems", 00:19:46.667 "params": { 00:19:46.667 "max_subsystems": 1024 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "nvmf_set_crdt", 00:19:46.667 "params": { 00:19:46.667 "crdt1": 0, 00:19:46.667 "crdt2": 0, 00:19:46.667 "crdt3": 0 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "nvmf_create_transport", 00:19:46.667 "params": { 00:19:46.667 "trtype": "TCP", 00:19:46.667 "max_queue_depth": 128, 00:19:46.667 "max_io_qpairs_per_ctrlr": 127, 00:19:46.667 "in_capsule_data_size": 4096, 00:19:46.667 "max_io_size": 131072, 00:19:46.667 "io_unit_size": 131072, 00:19:46.667 "max_aq_depth": 128, 00:19:46.667 "num_shared_buffers": 511, 00:19:46.667 "buf_cache_size": 4294967295, 00:19:46.667 "dif_insert_or_strip": false, 00:19:46.667 "zcopy": false, 00:19:46.667 "c2h_success": false, 00:19:46.667 "sock_priority": 0, 00:19:46.667 "abort_timeout_sec": 1, 00:19:46.667 "ack_timeout": 0, 00:19:46.667 "data_wr_pool_size": 0 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "nvmf_create_subsystem", 00:19:46.667 "params": { 00:19:46.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.667 "allow_any_host": false, 00:19:46.667 "serial_number": "SPDK00000000000001", 00:19:46.667 "model_number": "SPDK bdev Controller", 00:19:46.667 "max_namespaces": 10, 00:19:46.667 "min_cntlid": 1, 00:19:46.667 
"max_cntlid": 65519, 00:19:46.667 "ana_reporting": false 00:19:46.667 } 00:19:46.667 }, 00:19:46.667 { 00:19:46.667 "method": "nvmf_subsystem_add_host", 00:19:46.668 "params": { 00:19:46.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.668 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.668 "psk": "key0" 00:19:46.668 } 00:19:46.668 }, 00:19:46.668 { 00:19:46.668 "method": "nvmf_subsystem_add_ns", 00:19:46.668 "params": { 00:19:46.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.668 "namespace": { 00:19:46.668 "nsid": 1, 00:19:46.668 "bdev_name": "malloc0", 00:19:46.668 "nguid": "3AF0B474508246DA87A36D46997C5F17", 00:19:46.668 "uuid": "3af0b474-5082-46da-87a3-6d46997c5f17", 00:19:46.668 "no_auto_visible": false 00:19:46.668 } 00:19:46.668 } 00:19:46.668 }, 00:19:46.668 { 00:19:46.668 "method": "nvmf_subsystem_add_listener", 00:19:46.668 "params": { 00:19:46.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.668 "listen_address": { 00:19:46.668 "trtype": "TCP", 00:19:46.668 "adrfam": "IPv4", 00:19:46.668 "traddr": "10.0.0.2", 00:19:46.668 "trsvcid": "4420" 00:19:46.668 }, 00:19:46.668 "secure_channel": true 00:19:46.668 } 00:19:46.668 } 00:19:46.668 ] 00:19:46.668 } 00:19:46.668 ] 00:19:46.668 }' 00:19:46.668 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:46.927 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:46.927 "subsystems": [ 00:19:46.927 { 00:19:46.927 "subsystem": "keyring", 00:19:46.927 "config": [ 00:19:46.927 { 00:19:46.927 "method": "keyring_file_add_key", 00:19:46.927 "params": { 00:19:46.927 "name": "key0", 00:19:46.927 "path": "/tmp/tmp.zVkKwP74ju" 00:19:46.927 } 00:19:46.927 } 00:19:46.927 ] 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "subsystem": "iobuf", 00:19:46.927 "config": [ 00:19:46.927 { 00:19:46.927 "method": "iobuf_set_options", 00:19:46.927 "params": { 00:19:46.927 "small_pool_count": 8192, 00:19:46.927 "large_pool_count": 1024, 00:19:46.927 "small_bufsize": 8192, 00:19:46.927 "large_bufsize": 135168, 00:19:46.927 "enable_numa": false 00:19:46.927 } 00:19:46.927 } 00:19:46.927 ] 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "subsystem": "sock", 00:19:46.927 "config": [ 00:19:46.927 { 00:19:46.927 "method": "sock_set_default_impl", 00:19:46.927 "params": { 00:19:46.927 "impl_name": "posix" 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": "sock_impl_set_options", 00:19:46.927 "params": { 00:19:46.927 "impl_name": "ssl", 00:19:46.927 "recv_buf_size": 4096, 00:19:46.927 "send_buf_size": 4096, 00:19:46.927 "enable_recv_pipe": true, 00:19:46.927 "enable_quickack": false, 00:19:46.927 "enable_placement_id": 0, 00:19:46.927 "enable_zerocopy_send_server": true, 00:19:46.927 "enable_zerocopy_send_client": false, 00:19:46.927 "zerocopy_threshold": 0, 00:19:46.927 "tls_version": 0, 00:19:46.927 "enable_ktls": false 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": "sock_impl_set_options", 00:19:46.927 "params": { 00:19:46.927 "impl_name": "posix", 00:19:46.927 "recv_buf_size": 2097152, 00:19:46.927 "send_buf_size": 2097152, 00:19:46.927 "enable_recv_pipe": true, 00:19:46.927 "enable_quickack": false, 00:19:46.927 "enable_placement_id": 0, 00:19:46.927 "enable_zerocopy_send_server": true, 00:19:46.927 "enable_zerocopy_send_client": false, 00:19:46.927 "zerocopy_threshold": 0, 00:19:46.927 "tls_version": 0, 00:19:46.927 "enable_ktls": false 00:19:46.927 } 00:19:46.927 
} 00:19:46.927 ] 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "subsystem": "vmd", 00:19:46.927 "config": [] 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "subsystem": "accel", 00:19:46.927 "config": [ 00:19:46.927 { 00:19:46.927 "method": "accel_set_options", 00:19:46.927 "params": { 00:19:46.927 "small_cache_size": 128, 00:19:46.927 "large_cache_size": 16, 00:19:46.927 "task_count": 2048, 00:19:46.927 "sequence_count": 2048, 00:19:46.927 "buf_count": 2048 00:19:46.927 } 00:19:46.927 } 00:19:46.927 ] 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "subsystem": "bdev", 00:19:46.927 "config": [ 00:19:46.927 { 00:19:46.927 "method": "bdev_set_options", 00:19:46.927 "params": { 00:19:46.927 "bdev_io_pool_size": 65535, 00:19:46.927 "bdev_io_cache_size": 256, 00:19:46.927 "bdev_auto_examine": true, 00:19:46.927 "iobuf_small_cache_size": 128, 00:19:46.927 "iobuf_large_cache_size": 16 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": "bdev_raid_set_options", 00:19:46.927 "params": { 00:19:46.927 "process_window_size_kb": 1024, 00:19:46.927 "process_max_bandwidth_mb_sec": 0 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": "bdev_iscsi_set_options", 00:19:46.927 "params": { 00:19:46.927 "timeout_sec": 30 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": "bdev_nvme_set_options", 00:19:46.927 "params": { 00:19:46.927 "action_on_timeout": "none", 00:19:46.927 "timeout_us": 0, 00:19:46.927 "timeout_admin_us": 0, 00:19:46.927 "keep_alive_timeout_ms": 10000, 00:19:46.927 "arbitration_burst": 0, 00:19:46.927 "low_priority_weight": 0, 00:19:46.927 "medium_priority_weight": 0, 00:19:46.927 "high_priority_weight": 0, 00:19:46.927 "nvme_adminq_poll_period_us": 10000, 00:19:46.927 "nvme_ioq_poll_period_us": 0, 00:19:46.927 "io_queue_requests": 512, 00:19:46.927 "delay_cmd_submit": true, 00:19:46.927 "transport_retry_count": 4, 00:19:46.927 "bdev_retry_count": 3, 00:19:46.927 "transport_ack_timeout": 0, 00:19:46.927 "ctrlr_loss_timeout_sec": 0, 00:19:46.927 "reconnect_delay_sec": 0, 00:19:46.927 "fast_io_fail_timeout_sec": 0, 00:19:46.927 "disable_auto_failback": false, 00:19:46.927 "generate_uuids": false, 00:19:46.927 "transport_tos": 0, 00:19:46.927 "nvme_error_stat": false, 00:19:46.927 "rdma_srq_size": 0, 00:19:46.927 "io_path_stat": false, 00:19:46.927 "allow_accel_sequence": false, 00:19:46.927 "rdma_max_cq_size": 0, 00:19:46.927 "rdma_cm_event_timeout_ms": 0, 00:19:46.927 "dhchap_digests": [ 00:19:46.927 "sha256", 00:19:46.927 "sha384", 00:19:46.927 "sha512" 00:19:46.927 ], 00:19:46.927 "dhchap_dhgroups": [ 00:19:46.927 "null", 00:19:46.927 "ffdhe2048", 00:19:46.927 "ffdhe3072", 00:19:46.927 "ffdhe4096", 00:19:46.927 "ffdhe6144", 00:19:46.927 "ffdhe8192" 00:19:46.927 ] 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": "bdev_nvme_attach_controller", 00:19:46.927 "params": { 00:19:46.927 "name": "TLSTEST", 00:19:46.927 "trtype": "TCP", 00:19:46.927 "adrfam": "IPv4", 00:19:46.927 "traddr": "10.0.0.2", 00:19:46.927 "trsvcid": "4420", 00:19:46.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.927 "prchk_reftag": false, 00:19:46.927 "prchk_guard": false, 00:19:46.927 "ctrlr_loss_timeout_sec": 0, 00:19:46.927 "reconnect_delay_sec": 0, 00:19:46.927 "fast_io_fail_timeout_sec": 0, 00:19:46.927 "psk": "key0", 00:19:46.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.927 "hdgst": false, 00:19:46.927 "ddgst": false, 00:19:46.927 "multipath": "multipath" 00:19:46.927 } 00:19:46.927 }, 00:19:46.927 { 00:19:46.927 "method": 
"bdev_nvme_set_hotplug", 00:19:46.927 "params": { 00:19:46.928 "period_us": 100000, 00:19:46.928 "enable": false 00:19:46.928 } 00:19:46.928 }, 00:19:46.928 { 00:19:46.928 "method": "bdev_wait_for_examine" 00:19:46.928 } 00:19:46.928 ] 00:19:46.928 }, 00:19:46.928 { 00:19:46.928 "subsystem": "nbd", 00:19:46.928 "config": [] 00:19:46.928 } 00:19:46.928 ] 00:19:46.928 }' 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2599895 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2599895 ']' 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2599895 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2599895 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2599895' 00:19:46.928 killing process with pid 2599895 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2599895 00:19:46.928 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.928 00:19:46.928 Latency(us) 00:19:46.928 [2024-12-09T16:30:16.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.928 [2024-12-09T16:30:16.107Z] =================================================================================================================== 00:19:46.928 [2024-12-09T16:30:16.107Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.928 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2599895 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2599502 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2599502 ']' 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2599502 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2599502 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2599502' 00:19:47.187 killing process with pid 2599502 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2599502 00:19:47.187 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2599502 00:19:47.446 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:47.446 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.446 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.446 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:47.446 "subsystems": [ 00:19:47.446 { 00:19:47.446 "subsystem": "keyring", 00:19:47.446 "config": [ 00:19:47.446 { 00:19:47.446 "method": "keyring_file_add_key", 00:19:47.446 "params": { 00:19:47.446 "name": "key0", 00:19:47.446 "path": "/tmp/tmp.zVkKwP74ju" 00:19:47.446 } 00:19:47.446 } 00:19:47.446 ] 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "subsystem": "iobuf", 00:19:47.446 "config": [ 00:19:47.446 { 00:19:47.446 "method": "iobuf_set_options", 00:19:47.446 "params": { 00:19:47.446 "small_pool_count": 8192, 00:19:47.446 "large_pool_count": 1024, 00:19:47.446 "small_bufsize": 8192, 00:19:47.446 "large_bufsize": 135168, 00:19:47.446 "enable_numa": false 00:19:47.446 } 00:19:47.446 } 00:19:47.446 ] 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "subsystem": "sock", 00:19:47.446 "config": [ 00:19:47.446 { 00:19:47.446 "method": "sock_set_default_impl", 00:19:47.446 "params": { 00:19:47.446 "impl_name": "posix" 00:19:47.446 } 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "method": "sock_impl_set_options", 00:19:47.446 "params": { 00:19:47.446 "impl_name": "ssl", 00:19:47.446 "recv_buf_size": 4096, 00:19:47.446 "send_buf_size": 4096, 00:19:47.446 "enable_recv_pipe": true, 00:19:47.446 "enable_quickack": false, 00:19:47.446 "enable_placement_id": 0, 00:19:47.446 "enable_zerocopy_send_server": true, 00:19:47.446 "enable_zerocopy_send_client": false, 00:19:47.446 "zerocopy_threshold": 0, 00:19:47.446 "tls_version": 0, 00:19:47.446 "enable_ktls": false 00:19:47.446 } 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "method": "sock_impl_set_options", 00:19:47.446 "params": { 00:19:47.446 "impl_name": "posix", 00:19:47.446 "recv_buf_size": 2097152, 00:19:47.446 "send_buf_size": 2097152, 00:19:47.446 "enable_recv_pipe": true, 00:19:47.446 "enable_quickack": false, 00:19:47.446 "enable_placement_id": 0, 00:19:47.446 "enable_zerocopy_send_server": true, 00:19:47.446 "enable_zerocopy_send_client": false, 00:19:47.446 "zerocopy_threshold": 0, 00:19:47.446 "tls_version": 0, 00:19:47.446 "enable_ktls": false 00:19:47.446 } 00:19:47.446 } 00:19:47.446 ] 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "subsystem": "vmd", 00:19:47.446 "config": [] 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "subsystem": "accel", 00:19:47.446 "config": [ 00:19:47.446 { 00:19:47.446 "method": "accel_set_options", 00:19:47.446 "params": { 00:19:47.446 "small_cache_size": 128, 00:19:47.446 "large_cache_size": 16, 00:19:47.446 "task_count": 2048, 00:19:47.446 "sequence_count": 2048, 00:19:47.446 "buf_count": 2048 00:19:47.446 } 00:19:47.446 } 00:19:47.446 ] 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "subsystem": "bdev", 00:19:47.446 "config": [ 00:19:47.446 { 00:19:47.446 "method": "bdev_set_options", 00:19:47.446 "params": { 00:19:47.446 "bdev_io_pool_size": 65535, 00:19:47.446 "bdev_io_cache_size": 256, 00:19:47.446 "bdev_auto_examine": true, 00:19:47.446 "iobuf_small_cache_size": 128, 00:19:47.446 "iobuf_large_cache_size": 16 00:19:47.446 } 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "method": "bdev_raid_set_options", 00:19:47.446 "params": { 00:19:47.446 "process_window_size_kb": 1024, 00:19:47.446 "process_max_bandwidth_mb_sec": 0 00:19:47.446 } 00:19:47.446 }, 
00:19:47.446 { 00:19:47.446 "method": "bdev_iscsi_set_options", 00:19:47.446 "params": { 00:19:47.446 "timeout_sec": 30 00:19:47.446 } 00:19:47.446 }, 00:19:47.446 { 00:19:47.446 "method": "bdev_nvme_set_options", 00:19:47.446 "params": { 00:19:47.446 "action_on_timeout": "none", 00:19:47.446 "timeout_us": 0, 00:19:47.446 "timeout_admin_us": 0, 00:19:47.446 "keep_alive_timeout_ms": 10000, 00:19:47.446 "arbitration_burst": 0, 00:19:47.446 "low_priority_weight": 0, 00:19:47.446 "medium_priority_weight": 0, 00:19:47.446 "high_priority_weight": 0, 00:19:47.446 "nvme_adminq_poll_period_us": 10000, 00:19:47.446 "nvme_ioq_poll_period_us": 0, 00:19:47.446 "io_queue_requests": 0, 00:19:47.446 "delay_cmd_submit": true, 00:19:47.446 "transport_retry_count": 4, 00:19:47.446 "bdev_retry_count": 3, 00:19:47.446 "transport_ack_timeout": 0, 00:19:47.446 "ctrlr_loss_timeout_sec": 0, 00:19:47.446 "reconnect_delay_sec": 0, 00:19:47.446 "fast_io_fail_timeout_sec": 0, 00:19:47.446 "disable_auto_failback": false, 00:19:47.446 "generate_uuids": false, 00:19:47.446 "transport_tos": 0, 00:19:47.447 "nvme_error_stat": false, 00:19:47.447 "rdma_srq_size": 0, 00:19:47.447 "io_path_stat": false, 00:19:47.447 "allow_accel_sequence": false, 00:19:47.447 "rdma_max_cq_size": 0, 00:19:47.447 "rdma_cm_event_timeout_ms": 0, 00:19:47.447 "dhchap_digests": [ 00:19:47.447 "sha256", 00:19:47.447 "sha384", 00:19:47.447 "sha512" 00:19:47.447 ], 00:19:47.447 "dhchap_dhgroups": [ 00:19:47.447 "null", 00:19:47.447 "ffdhe2048", 00:19:47.447 "ffdhe3072", 00:19:47.447 "ffdhe4096", 00:19:47.447 "ffdhe6144", 00:19:47.447 "ffdhe8192" 00:19:47.447 ] 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "bdev_nvme_set_hotplug", 00:19:47.447 "params": { 00:19:47.447 "period_us": 100000, 00:19:47.447 "enable": false 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "bdev_malloc_create", 00:19:47.447 "params": { 00:19:47.447 "name": "malloc0", 00:19:47.447 "num_blocks": 8192, 00:19:47.447 "block_size": 4096, 00:19:47.447 "physical_block_size": 4096, 00:19:47.447 "uuid": "3af0b474-5082-46da-87a3-6d46997c5f17", 00:19:47.447 "optimal_io_boundary": 0, 00:19:47.447 "md_size": 0, 00:19:47.447 "dif_type": 0, 00:19:47.447 "dif_is_head_of_md": false, 00:19:47.447 "dif_pi_format": 0 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "bdev_wait_for_examine" 00:19:47.447 } 00:19:47.447 ] 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "subsystem": "nbd", 00:19:47.447 "config": [] 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "subsystem": "scheduler", 00:19:47.447 "config": [ 00:19:47.447 { 00:19:47.447 "method": "framework_set_scheduler", 00:19:47.447 "params": { 00:19:47.447 "name": "static" 00:19:47.447 } 00:19:47.447 } 00:19:47.447 ] 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "subsystem": "nvmf", 00:19:47.447 "config": [ 00:19:47.447 { 00:19:47.447 "method": "nvmf_set_config", 00:19:47.447 "params": { 00:19:47.447 "discovery_filter": "match_any", 00:19:47.447 "admin_cmd_passthru": { 00:19:47.447 "identify_ctrlr": false 00:19:47.447 }, 00:19:47.447 "dhchap_digests": [ 00:19:47.447 "sha256", 00:19:47.447 "sha384", 00:19:47.447 "sha512" 00:19:47.447 ], 00:19:47.447 "dhchap_dhgroups": [ 00:19:47.447 "null", 00:19:47.447 "ffdhe2048", 00:19:47.447 "ffdhe3072", 00:19:47.447 "ffdhe4096", 00:19:47.447 "ffdhe6144", 00:19:47.447 "ffdhe8192" 00:19:47.447 ] 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_set_max_subsystems", 00:19:47.447 "params": { 00:19:47.447 "max_subsystems": 1024 
00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_set_crdt", 00:19:47.447 "params": { 00:19:47.447 "crdt1": 0, 00:19:47.447 "crdt2": 0, 00:19:47.447 "crdt3": 0 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_create_transport", 00:19:47.447 "params": { 00:19:47.447 "trtype": "TCP", 00:19:47.447 "max_queue_depth": 128, 00:19:47.447 "max_io_qpairs_per_ctrlr": 127, 00:19:47.447 "in_capsule_data_size": 4096, 00:19:47.447 "max_io_size": 131072, 00:19:47.447 "io_unit_size": 131072, 00:19:47.447 "max_aq_depth": 128, 00:19:47.447 "num_shared_buffers": 511, 00:19:47.447 "buf_cache_size": 4294967295, 00:19:47.447 "dif_insert_or_strip": false, 00:19:47.447 "zcopy": false, 00:19:47.447 "c2h_success": false, 00:19:47.447 "sock_priority": 0, 00:19:47.447 "abort_timeout_sec": 1, 00:19:47.447 "ack_timeout": 0, 00:19:47.447 "data_wr_pool_size": 0 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_create_subsystem", 00:19:47.447 "params": { 00:19:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.447 "allow_any_host": false, 00:19:47.447 "serial_number": "SPDK00000000000001", 00:19:47.447 "model_number": "SPDK bdev Controller", 00:19:47.447 "max_namespaces": 10, 00:19:47.447 "min_cntlid": 1, 00:19:47.447 "max_cntlid": 65519, 00:19:47.447 "ana_reporting": false 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_subsystem_add_host", 00:19:47.447 "params": { 00:19:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.447 "host": "nqn.2016-06.io.spdk:host1", 00:19:47.447 "psk": "key0" 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_subsystem_add_ns", 00:19:47.447 "params": { 00:19:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.447 "namespace": { 00:19:47.447 "nsid": 1, 00:19:47.447 "bdev_name": "malloc0", 00:19:47.447 "nguid": "3AF0B474508246DA87A36D46997C5F17", 00:19:47.447 "uuid": "3af0b474-5082-46da-87a3-6d46997c5f17", 00:19:47.447 "no_auto_visible": false 00:19:47.447 } 00:19:47.447 } 00:19:47.447 }, 00:19:47.447 { 00:19:47.447 "method": "nvmf_subsystem_add_listener", 00:19:47.447 "params": { 00:19:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.447 "listen_address": { 00:19:47.447 "trtype": "TCP", 00:19:47.447 "adrfam": "IPv4", 00:19:47.447 "traddr": "10.0.0.2", 00:19:47.447 "trsvcid": "4420" 00:19:47.447 }, 00:19:47.447 "secure_channel": true 00:19:47.447 } 00:19:47.447 } 00:19:47.447 ] 00:19:47.447 } 00:19:47.447 ] 00:19:47.447 }' 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2600142 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2600142 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2600142 ']' 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:47.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.447 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.447 [2024-12-09 17:30:16.511029] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:47.447 [2024-12-09 17:30:16.511074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.447 [2024-12-09 17:30:16.589584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.706 [2024-12-09 17:30:16.629108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.706 [2024-12-09 17:30:16.629140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.706 [2024-12-09 17:30:16.629147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.706 [2024-12-09 17:30:16.629153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.706 [2024-12-09 17:30:16.629158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.706 [2024-12-09 17:30:16.629754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.706 [2024-12-09 17:30:16.842020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.706 [2024-12-09 17:30:16.874057] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.706 [2024-12-09 17:30:16.874258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2600262 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2600262 /var/tmp/bdevperf.sock 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2600262 ']' 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:48.273 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.273 17:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:48.273 "subsystems": [ 00:19:48.273 { 00:19:48.273 "subsystem": "keyring", 00:19:48.273 "config": [ 00:19:48.273 { 00:19:48.273 "method": "keyring_file_add_key", 00:19:48.273 "params": { 00:19:48.273 "name": "key0", 00:19:48.273 "path": "/tmp/tmp.zVkKwP74ju" 00:19:48.273 } 00:19:48.273 } 00:19:48.273 ] 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "subsystem": "iobuf", 00:19:48.273 "config": [ 00:19:48.273 { 00:19:48.273 "method": "iobuf_set_options", 00:19:48.273 "params": { 00:19:48.273 "small_pool_count": 8192, 00:19:48.273 "large_pool_count": 1024, 00:19:48.273 "small_bufsize": 8192, 00:19:48.273 "large_bufsize": 135168, 00:19:48.273 "enable_numa": false 00:19:48.273 } 00:19:48.273 } 00:19:48.273 ] 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "subsystem": "sock", 00:19:48.273 "config": [ 00:19:48.273 { 00:19:48.273 "method": "sock_set_default_impl", 00:19:48.273 "params": { 00:19:48.273 "impl_name": "posix" 00:19:48.273 } 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "method": "sock_impl_set_options", 00:19:48.273 "params": { 00:19:48.273 "impl_name": "ssl", 00:19:48.273 "recv_buf_size": 4096, 00:19:48.273 "send_buf_size": 4096, 00:19:48.273 "enable_recv_pipe": true, 00:19:48.273 "enable_quickack": false, 00:19:48.273 "enable_placement_id": 0, 00:19:48.273 "enable_zerocopy_send_server": true, 00:19:48.273 "enable_zerocopy_send_client": false, 00:19:48.273 "zerocopy_threshold": 0, 00:19:48.273 "tls_version": 0, 00:19:48.273 "enable_ktls": false 00:19:48.273 } 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "method": "sock_impl_set_options", 00:19:48.273 "params": { 00:19:48.273 "impl_name": "posix", 00:19:48.273 "recv_buf_size": 2097152, 00:19:48.273 "send_buf_size": 2097152, 00:19:48.273 "enable_recv_pipe": true, 00:19:48.273 "enable_quickack": false, 00:19:48.273 "enable_placement_id": 0, 00:19:48.273 "enable_zerocopy_send_server": true, 00:19:48.273 "enable_zerocopy_send_client": false, 00:19:48.273 "zerocopy_threshold": 0, 00:19:48.273 "tls_version": 0, 00:19:48.273 "enable_ktls": false 00:19:48.273 } 00:19:48.273 } 00:19:48.273 ] 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "subsystem": "vmd", 00:19:48.273 "config": [] 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "subsystem": "accel", 00:19:48.273 "config": [ 00:19:48.273 { 00:19:48.273 "method": "accel_set_options", 00:19:48.273 "params": { 00:19:48.273 "small_cache_size": 128, 00:19:48.273 "large_cache_size": 16, 00:19:48.273 "task_count": 2048, 00:19:48.273 "sequence_count": 2048, 00:19:48.273 "buf_count": 2048 00:19:48.273 } 00:19:48.273 } 00:19:48.273 ] 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "subsystem": "bdev", 00:19:48.273 "config": [ 00:19:48.273 { 00:19:48.273 "method": "bdev_set_options", 00:19:48.273 "params": { 00:19:48.273 "bdev_io_pool_size": 65535, 00:19:48.273 "bdev_io_cache_size": 256, 00:19:48.273 "bdev_auto_examine": true, 00:19:48.273 "iobuf_small_cache_size": 128, 00:19:48.273 "iobuf_large_cache_size": 16 00:19:48.273 } 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "method": "bdev_raid_set_options", 00:19:48.273 "params": { 00:19:48.273 "process_window_size_kb": 1024, 00:19:48.273 "process_max_bandwidth_mb_sec": 0 00:19:48.273 } 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "method": "bdev_iscsi_set_options", 00:19:48.273 "params": { 00:19:48.273 "timeout_sec": 30 00:19:48.273 } 00:19:48.273 }, 00:19:48.273 { 00:19:48.273 "method": "bdev_nvme_set_options", 00:19:48.273 "params": { 00:19:48.273 "action_on_timeout": "none", 00:19:48.273 
"timeout_us": 0, 00:19:48.273 "timeout_admin_us": 0, 00:19:48.273 "keep_alive_timeout_ms": 10000, 00:19:48.273 "arbitration_burst": 0, 00:19:48.273 "low_priority_weight": 0, 00:19:48.273 "medium_priority_weight": 0, 00:19:48.273 "high_priority_weight": 0, 00:19:48.273 "nvme_adminq_poll_period_us": 10000, 00:19:48.273 "nvme_ioq_poll_period_us": 0, 00:19:48.273 "io_queue_requests": 512, 00:19:48.273 "delay_cmd_submit": true, 00:19:48.273 "transport_retry_count": 4, 00:19:48.273 "bdev_retry_count": 3, 00:19:48.273 "transport_ack_timeout": 0, 00:19:48.273 "ctrlr_loss_timeout_sec": 0, 00:19:48.273 "reconnect_delay_sec": 0, 00:19:48.273 "fast_io_fail_timeout_sec": 0, 00:19:48.273 "disable_auto_failback": false, 00:19:48.273 "generate_uuids": false, 00:19:48.273 "transport_tos": 0, 00:19:48.273 "nvme_error_stat": false, 00:19:48.273 "rdma_srq_size": 0, 00:19:48.273 "io_path_stat": false, 00:19:48.273 "allow_accel_sequence": false, 00:19:48.273 "rdma_max_cq_size": 0, 00:19:48.273 "rdma_cm_event_timeout_ms": 0, 00:19:48.273 "dhchap_digests": [ 00:19:48.273 "sha256", 00:19:48.273 "sha384", 00:19:48.273 "sha512" 00:19:48.274 ], 00:19:48.274 "dhchap_dhgroups": [ 00:19:48.274 "null", 00:19:48.274 "ffdhe2048", 00:19:48.274 "ffdhe3072", 00:19:48.274 "ffdhe4096", 00:19:48.274 "ffdhe6144", 00:19:48.274 "ffdhe8192" 00:19:48.274 ] 00:19:48.274 } 00:19:48.274 }, 00:19:48.274 { 00:19:48.274 "method": "bdev_nvme_attach_controller", 00:19:48.274 "params": { 00:19:48.274 "name": "TLSTEST", 00:19:48.274 "trtype": "TCP", 00:19:48.274 "adrfam": "IPv4", 00:19:48.274 "traddr": "10.0.0.2", 00:19:48.274 "trsvcid": "4420", 00:19:48.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.274 "prchk_reftag": false, 00:19:48.274 "prchk_guard": false, 00:19:48.274 "ctrlr_loss_timeout_sec": 0, 00:19:48.274 "reconnect_delay_sec": 0, 00:19:48.274 "fast_io_fail_timeout_sec": 0, 00:19:48.274 "psk": "key0", 00:19:48.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.274 "hdgst": false, 00:19:48.274 "ddgst": false, 00:19:48.274 "multipath": "multipath" 00:19:48.274 } 00:19:48.274 }, 00:19:48.274 { 00:19:48.274 "method": "bdev_nvme_set_hotplug", 00:19:48.274 "params": { 00:19:48.274 "period_us": 100000, 00:19:48.274 "enable": false 00:19:48.274 } 00:19:48.274 }, 00:19:48.274 { 00:19:48.274 "method": "bdev_wait_for_examine" 00:19:48.274 } 00:19:48.274 ] 00:19:48.274 }, 00:19:48.274 { 00:19:48.274 "subsystem": "nbd", 00:19:48.274 "config": [] 00:19:48.274 } 00:19:48.274 ] 00:19:48.274 }' 00:19:48.274 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.274 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.274 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.274 [2024-12-09 17:30:17.421073] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:19:48.274 [2024-12-09 17:30:17.421121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600262 ] 00:19:48.532 [2024-12-09 17:30:17.493764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.532 [2024-12-09 17:30:17.534245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.532 [2024-12-09 17:30:17.687101] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.099 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.099 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.099 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.357 Running I/O for 10 seconds... 00:19:51.227 5446.00 IOPS, 21.27 MiB/s [2024-12-09T16:30:21.781Z] 5484.00 IOPS, 21.42 MiB/s [2024-12-09T16:30:22.716Z] 5513.00 IOPS, 21.54 MiB/s [2024-12-09T16:30:23.650Z] 5525.25 IOPS, 21.58 MiB/s [2024-12-09T16:30:24.585Z] 5554.40 IOPS, 21.70 MiB/s [2024-12-09T16:30:25.519Z] 5552.50 IOPS, 21.69 MiB/s [2024-12-09T16:30:26.453Z] 5562.00 IOPS, 21.73 MiB/s [2024-12-09T16:30:27.388Z] 5568.62 IOPS, 21.75 MiB/s [2024-12-09T16:30:28.762Z] 5573.67 IOPS, 21.77 MiB/s [2024-12-09T16:30:28.762Z] 5583.10 IOPS, 21.81 MiB/s 00:19:59.583 Latency(us) 00:19:59.583 [2024-12-09T16:30:28.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.583 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.583 Verification LBA range: start 0x0 length 0x2000 00:19:59.583 TLSTESTn1 : 10.01 5588.84 21.83 0.00 0.00 22869.54 5149.26 31457.28 00:19:59.583 [2024-12-09T16:30:28.762Z] =================================================================================================================== 00:19:59.583 [2024-12-09T16:30:28.762Z] Total : 5588.84 21.83 0.00 0.00 22869.54 5149.26 31457.28 00:19:59.583 { 00:19:59.583 "results": [ 00:19:59.583 { 00:19:59.583 "job": "TLSTESTn1", 00:19:59.583 "core_mask": "0x4", 00:19:59.583 "workload": "verify", 00:19:59.583 "status": "finished", 00:19:59.583 "verify_range": { 00:19:59.583 "start": 0, 00:19:59.583 "length": 8192 00:19:59.583 }, 00:19:59.583 "queue_depth": 128, 00:19:59.583 "io_size": 4096, 00:19:59.583 "runtime": 10.012447, 00:19:59.583 "iops": 5588.843566412886, 00:19:59.583 "mibps": 21.831420181300334, 00:19:59.583 "io_failed": 0, 00:19:59.583 "io_timeout": 0, 00:19:59.583 "avg_latency_us": 22869.535289426254, 00:19:59.583 "min_latency_us": 5149.257142857143, 00:19:59.583 "max_latency_us": 31457.28 00:19:59.583 } 00:19:59.583 ], 00:19:59.583 "core_count": 1 00:19:59.583 } 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2600262 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2600262 ']' 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2600262 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600262 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600262' 00:19:59.583 killing process with pid 2600262 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2600262 00:19:59.583 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.583 00:19:59.583 Latency(us) 00:19:59.583 [2024-12-09T16:30:28.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.583 [2024-12-09T16:30:28.762Z] =================================================================================================================== 00:19:59.583 [2024-12-09T16:30:28.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2600262 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2600142 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2600142 ']' 00:19:59.583 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2600142 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600142 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600142' 00:19:59.584 killing process with pid 2600142 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2600142 00:19:59.584 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2600142 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2602154 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2602154 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2602154 ']' 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.842 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.842 [2024-12-09 17:30:28.903692] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:19:59.842 [2024-12-09 17:30:28.903742] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.842 [2024-12-09 17:30:28.979204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.842 [2024-12-09 17:30:29.016376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.842 [2024-12-09 17:30:29.016410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.842 [2024-12-09 17:30:29.016417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.842 [2024-12-09 17:30:29.016423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.842 [2024-12-09 17:30:29.016428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
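Note: the nvmf_tgt launch above is the nvmfappstart/waitforlisten pattern from nvmf/common.sh that this trace repeats on every app restart. A minimal sketch of the Linux path, assuming the same workspace and socket paths as the log; the polling loop body is an assumption, since the trace only shows the launch, the "Waiting for process..." echo, and the final return 0:

    # launch the target inside the test's network namespace (verbatim from the trace)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # assumed probe: rpc_get_methods succeeds once the target serves RPCs on the socket
        ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done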
00:19:59.842 [2024-12-09 17:30:29.016972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zVkKwP74ju 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zVkKwP74ju 00:20:00.101 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.359 [2024-12-09 17:30:29.319957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.359 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.616 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.616 [2024-12-09 17:30:29.708941] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.616 [2024-12-09 17:30:29.709136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.616 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.874 malloc0 00:20:00.874 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.132 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2602459 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2602459 /var/tmp/bdevperf.sock 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2602459 ']' 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.390 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.647 [2024-12-09 17:30:30.590168] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:20:01.647 [2024-12-09 17:30:30.590225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602459 ] 00:20:01.647 [2024-12-09 17:30:30.665537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.647 [2024-12-09 17:30:30.703966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.647 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.647 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.647 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:20:01.905 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:02.162 [2024-12-09 17:30:31.155460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.162 nvme0n1 00:20:02.162 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.162 Running I/O for 1 seconds... 
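Note: before the 1-second run above starts, the client side is wired up with two RPCs against the bdevperf app socket and then driven by bdevperf.py; condensed from target/tls.sh@229-230 and @234 in this trace, with all sockets, key paths, and NQNs verbatim from the log and rpc.py abbreviated for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # register the same PSK file inside the bdevperf app's keyring
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju
    # attach over NVMe/TCP with TLS; --psk selects the keyring entry
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # kick off the verify workload in the already-running bdevperf instance
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests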
00:20:03.537 5429.00 IOPS, 21.21 MiB/s 00:20:03.537 Latency(us) 00:20:03.537 [2024-12-09T16:30:32.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.537 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.537 Verification LBA range: start 0x0 length 0x2000 00:20:03.537 nvme0n1 : 1.01 5490.41 21.45 0.00 0.00 23162.04 4743.56 29709.65 00:20:03.537 [2024-12-09T16:30:32.716Z] =================================================================================================================== 00:20:03.537 [2024-12-09T16:30:32.716Z] Total : 5490.41 21.45 0.00 0.00 23162.04 4743.56 29709.65 00:20:03.537 { 00:20:03.537 "results": [ 00:20:03.537 { 00:20:03.537 "job": "nvme0n1", 00:20:03.537 "core_mask": "0x2", 00:20:03.537 "workload": "verify", 00:20:03.537 "status": "finished", 00:20:03.537 "verify_range": { 00:20:03.537 "start": 0, 00:20:03.537 "length": 8192 00:20:03.537 }, 00:20:03.537 "queue_depth": 128, 00:20:03.537 "io_size": 4096, 00:20:03.537 "runtime": 1.012129, 00:20:03.537 "iops": 5490.4068552526405, 00:20:03.537 "mibps": 21.446901778330627, 00:20:03.537 "io_failed": 0, 00:20:03.537 "io_timeout": 0, 00:20:03.537 "avg_latency_us": 23162.037086814573, 00:20:03.537 "min_latency_us": 4743.558095238095, 00:20:03.537 "max_latency_us": 29709.653333333332 00:20:03.537 } 00:20:03.537 ], 00:20:03.537 "core_count": 1 00:20:03.537 } 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2602459 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2602459 ']' 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2602459 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602459 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602459' 00:20:03.537 killing process with pid 2602459 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2602459 00:20:03.537 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.537 00:20:03.537 Latency(us) 00:20:03.537 [2024-12-09T16:30:32.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.537 [2024-12-09T16:30:32.716Z] =================================================================================================================== 00:20:03.537 [2024-12-09T16:30:32.716Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2602459 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2602154 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2602154 ']' 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2602154 00:20:03.537 17:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602154 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602154' 00:20:03.537 killing process with pid 2602154 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2602154 00:20:03.537 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2602154 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2602723 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2602723 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2602723 ']' 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.796 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.796 [2024-12-09 17:30:32.858907] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:20:03.796 [2024-12-09 17:30:32.858955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.796 [2024-12-09 17:30:32.932289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.796 [2024-12-09 17:30:32.971097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.796 [2024-12-09 17:30:32.971133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
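
That notice, and the ones that follow it, boil down to two concrete commands in this workspace (spdk_trace is built under build/bin alongside nvmf_tgt):

    # Snapshot the nvmf trace group of app instance 0 while it runs...
    ./build/bin/spdk_trace -s nvmf -i 0

    # ...or keep the raw shared-memory buffer; the cleanup phase at the end
    # of this test archives this same file with tar.
    cp /dev/shm/nvmf_trace.0 /tmp/
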
00:20:03.796 [2024-12-09 17:30:32.971141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.796 [2024-12-09 17:30:32.971147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.796 [2024-12-09 17:30:32.971152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.796 [2024-12-09 17:30:32.971707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.054 [2024-12-09 17:30:33.106376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.054 malloc0 00:20:04.054 [2024-12-09 17:30:33.134409] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.054 [2024-12-09 17:30:33.134621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2602938 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2602938 /var/tmp/bdevperf.sock 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2602938 ']' 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.054 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.054 [2024-12-09 17:30:33.210194] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
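
bdevperf is launched above with -z, so it comes up idle: no bdev exists until the keyring and attach RPCs arrive on /var/tmp/bdevperf.sock, and no I/O moves until perform_tests is called. The pattern, condensed from the trace (workspace prefix dropped):

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!

    # waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock, then configure it
    # over RPC (keyring_file_add_key, bdev_nvme_attach_controller), and only
    # then start the workload:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
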
00:20:04.054 [2024-12-09 17:30:33.210238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602938 ] 00:20:04.313 [2024-12-09 17:30:33.284744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.313 [2024-12-09 17:30:33.323703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.313 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.313 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.313 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zVkKwP74ju 00:20:04.571 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.829 [2024-12-09 17:30:33.784000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.829 nvme0n1 00:20:04.829 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.829 Running I/O for 1 seconds... 00:20:06.203 5190.00 IOPS, 20.27 MiB/s 00:20:06.203 Latency(us) 00:20:06.203 [2024-12-09T16:30:35.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.203 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:06.203 Verification LBA range: start 0x0 length 0x2000 00:20:06.204 nvme0n1 : 1.02 5229.39 20.43 0.00 0.00 24295.81 6459.98 28086.86 00:20:06.204 [2024-12-09T16:30:35.383Z] =================================================================================================================== 00:20:06.204 [2024-12-09T16:30:35.383Z] Total : 5229.39 20.43 0.00 0.00 24295.81 6459.98 28086.86 00:20:06.204 { 00:20:06.204 "results": [ 00:20:06.204 { 00:20:06.204 "job": "nvme0n1", 00:20:06.204 "core_mask": "0x2", 00:20:06.204 "workload": "verify", 00:20:06.204 "status": "finished", 00:20:06.204 "verify_range": { 00:20:06.204 "start": 0, 00:20:06.204 "length": 8192 00:20:06.204 }, 00:20:06.204 "queue_depth": 128, 00:20:06.204 "io_size": 4096, 00:20:06.204 "runtime": 1.016944, 00:20:06.204 "iops": 5229.393162258689, 00:20:06.204 "mibps": 20.427317040073003, 00:20:06.204 "io_failed": 0, 00:20:06.204 "io_timeout": 0, 00:20:06.204 "avg_latency_us": 24295.806283063805, 00:20:06.204 "min_latency_us": 6459.977142857143, 00:20:06.204 "max_latency_us": 28086.85714285714 00:20:06.204 } 00:20:06.204 ], 00:20:06.204 "core_count": 1 00:20:06.204 } 00:20:06.204 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:06.204 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.204 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.204 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.204 17:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:06.204 "subsystems": [ 00:20:06.204 { 00:20:06.204 "subsystem": "keyring", 00:20:06.204 "config": [ 00:20:06.204 { 00:20:06.204 "method": "keyring_file_add_key", 00:20:06.204 "params": { 00:20:06.204 "name": "key0", 00:20:06.204 "path": "/tmp/tmp.zVkKwP74ju" 00:20:06.204 } 00:20:06.204 } 00:20:06.204 ] 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "subsystem": "iobuf", 00:20:06.204 "config": [ 00:20:06.204 { 00:20:06.204 "method": "iobuf_set_options", 00:20:06.204 "params": { 00:20:06.204 "small_pool_count": 8192, 00:20:06.204 "large_pool_count": 1024, 00:20:06.204 "small_bufsize": 8192, 00:20:06.204 "large_bufsize": 135168, 00:20:06.204 "enable_numa": false 00:20:06.204 } 00:20:06.204 } 00:20:06.204 ] 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "subsystem": "sock", 00:20:06.204 "config": [ 00:20:06.204 { 00:20:06.204 "method": "sock_set_default_impl", 00:20:06.204 "params": { 00:20:06.204 "impl_name": "posix" 00:20:06.204 } 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "method": "sock_impl_set_options", 00:20:06.204 "params": { 00:20:06.204 "impl_name": "ssl", 00:20:06.204 "recv_buf_size": 4096, 00:20:06.204 "send_buf_size": 4096, 00:20:06.204 "enable_recv_pipe": true, 00:20:06.204 "enable_quickack": false, 00:20:06.204 "enable_placement_id": 0, 00:20:06.204 "enable_zerocopy_send_server": true, 00:20:06.204 "enable_zerocopy_send_client": false, 00:20:06.204 "zerocopy_threshold": 0, 00:20:06.204 "tls_version": 0, 00:20:06.204 "enable_ktls": false 00:20:06.204 } 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "method": "sock_impl_set_options", 00:20:06.204 "params": { 00:20:06.204 "impl_name": "posix", 00:20:06.204 "recv_buf_size": 2097152, 00:20:06.204 "send_buf_size": 2097152, 00:20:06.204 "enable_recv_pipe": true, 00:20:06.204 "enable_quickack": false, 00:20:06.204 "enable_placement_id": 0, 00:20:06.204 "enable_zerocopy_send_server": true, 00:20:06.204 "enable_zerocopy_send_client": false, 00:20:06.204 "zerocopy_threshold": 0, 00:20:06.204 "tls_version": 0, 00:20:06.204 "enable_ktls": false 00:20:06.204 } 00:20:06.204 } 00:20:06.204 ] 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "subsystem": "vmd", 00:20:06.204 "config": [] 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "subsystem": "accel", 00:20:06.204 "config": [ 00:20:06.204 { 00:20:06.204 "method": "accel_set_options", 00:20:06.204 "params": { 00:20:06.204 "small_cache_size": 128, 00:20:06.204 "large_cache_size": 16, 00:20:06.204 "task_count": 2048, 00:20:06.204 "sequence_count": 2048, 00:20:06.204 "buf_count": 2048 00:20:06.204 } 00:20:06.204 } 00:20:06.204 ] 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "subsystem": "bdev", 00:20:06.204 "config": [ 00:20:06.204 { 00:20:06.204 "method": "bdev_set_options", 00:20:06.204 "params": { 00:20:06.204 "bdev_io_pool_size": 65535, 00:20:06.204 "bdev_io_cache_size": 256, 00:20:06.204 "bdev_auto_examine": true, 00:20:06.204 "iobuf_small_cache_size": 128, 00:20:06.204 "iobuf_large_cache_size": 16 00:20:06.204 } 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "method": "bdev_raid_set_options", 00:20:06.204 "params": { 00:20:06.204 "process_window_size_kb": 1024, 00:20:06.204 "process_max_bandwidth_mb_sec": 0 00:20:06.204 } 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "method": "bdev_iscsi_set_options", 00:20:06.204 "params": { 00:20:06.204 "timeout_sec": 30 00:20:06.204 } 00:20:06.204 }, 00:20:06.204 { 00:20:06.204 "method": "bdev_nvme_set_options", 00:20:06.204 "params": { 00:20:06.204 "action_on_timeout": "none", 00:20:06.204 
"timeout_us": 0, 00:20:06.204 "timeout_admin_us": 0, 00:20:06.204 "keep_alive_timeout_ms": 10000, 00:20:06.204 "arbitration_burst": 0, 00:20:06.204 "low_priority_weight": 0, 00:20:06.204 "medium_priority_weight": 0, 00:20:06.204 "high_priority_weight": 0, 00:20:06.204 "nvme_adminq_poll_period_us": 10000, 00:20:06.204 "nvme_ioq_poll_period_us": 0, 00:20:06.204 "io_queue_requests": 0, 00:20:06.204 "delay_cmd_submit": true, 00:20:06.204 "transport_retry_count": 4, 00:20:06.204 "bdev_retry_count": 3, 00:20:06.204 "transport_ack_timeout": 0, 00:20:06.205 "ctrlr_loss_timeout_sec": 0, 00:20:06.205 "reconnect_delay_sec": 0, 00:20:06.205 "fast_io_fail_timeout_sec": 0, 00:20:06.205 "disable_auto_failback": false, 00:20:06.205 "generate_uuids": false, 00:20:06.205 "transport_tos": 0, 00:20:06.205 "nvme_error_stat": false, 00:20:06.205 "rdma_srq_size": 0, 00:20:06.205 "io_path_stat": false, 00:20:06.205 "allow_accel_sequence": false, 00:20:06.205 "rdma_max_cq_size": 0, 00:20:06.205 "rdma_cm_event_timeout_ms": 0, 00:20:06.205 "dhchap_digests": [ 00:20:06.205 "sha256", 00:20:06.205 "sha384", 00:20:06.205 "sha512" 00:20:06.205 ], 00:20:06.205 "dhchap_dhgroups": [ 00:20:06.205 "null", 00:20:06.205 "ffdhe2048", 00:20:06.205 "ffdhe3072", 00:20:06.205 "ffdhe4096", 00:20:06.205 "ffdhe6144", 00:20:06.205 "ffdhe8192" 00:20:06.205 ] 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "bdev_nvme_set_hotplug", 00:20:06.205 "params": { 00:20:06.205 "period_us": 100000, 00:20:06.205 "enable": false 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "bdev_malloc_create", 00:20:06.205 "params": { 00:20:06.205 "name": "malloc0", 00:20:06.205 "num_blocks": 8192, 00:20:06.205 "block_size": 4096, 00:20:06.205 "physical_block_size": 4096, 00:20:06.205 "uuid": "5bd19433-aeb0-4ad4-8cbc-ccc3a2d2fc17", 00:20:06.205 "optimal_io_boundary": 0, 00:20:06.205 "md_size": 0, 00:20:06.205 "dif_type": 0, 00:20:06.205 "dif_is_head_of_md": false, 00:20:06.205 "dif_pi_format": 0 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "bdev_wait_for_examine" 00:20:06.205 } 00:20:06.205 ] 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "subsystem": "nbd", 00:20:06.205 "config": [] 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "subsystem": "scheduler", 00:20:06.205 "config": [ 00:20:06.205 { 00:20:06.205 "method": "framework_set_scheduler", 00:20:06.205 "params": { 00:20:06.205 "name": "static" 00:20:06.205 } 00:20:06.205 } 00:20:06.205 ] 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "subsystem": "nvmf", 00:20:06.205 "config": [ 00:20:06.205 { 00:20:06.205 "method": "nvmf_set_config", 00:20:06.205 "params": { 00:20:06.205 "discovery_filter": "match_any", 00:20:06.205 "admin_cmd_passthru": { 00:20:06.205 "identify_ctrlr": false 00:20:06.205 }, 00:20:06.205 "dhchap_digests": [ 00:20:06.205 "sha256", 00:20:06.205 "sha384", 00:20:06.205 "sha512" 00:20:06.205 ], 00:20:06.205 "dhchap_dhgroups": [ 00:20:06.205 "null", 00:20:06.205 "ffdhe2048", 00:20:06.205 "ffdhe3072", 00:20:06.205 "ffdhe4096", 00:20:06.205 "ffdhe6144", 00:20:06.205 "ffdhe8192" 00:20:06.205 ] 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_set_max_subsystems", 00:20:06.205 "params": { 00:20:06.205 "max_subsystems": 1024 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_set_crdt", 00:20:06.205 "params": { 00:20:06.205 "crdt1": 0, 00:20:06.205 "crdt2": 0, 00:20:06.205 "crdt3": 0 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_create_transport", 00:20:06.205 "params": 
{ 00:20:06.205 "trtype": "TCP", 00:20:06.205 "max_queue_depth": 128, 00:20:06.205 "max_io_qpairs_per_ctrlr": 127, 00:20:06.205 "in_capsule_data_size": 4096, 00:20:06.205 "max_io_size": 131072, 00:20:06.205 "io_unit_size": 131072, 00:20:06.205 "max_aq_depth": 128, 00:20:06.205 "num_shared_buffers": 511, 00:20:06.205 "buf_cache_size": 4294967295, 00:20:06.205 "dif_insert_or_strip": false, 00:20:06.205 "zcopy": false, 00:20:06.205 "c2h_success": false, 00:20:06.205 "sock_priority": 0, 00:20:06.205 "abort_timeout_sec": 1, 00:20:06.205 "ack_timeout": 0, 00:20:06.205 "data_wr_pool_size": 0 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_create_subsystem", 00:20:06.205 "params": { 00:20:06.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.205 "allow_any_host": false, 00:20:06.205 "serial_number": "00000000000000000000", 00:20:06.205 "model_number": "SPDK bdev Controller", 00:20:06.205 "max_namespaces": 32, 00:20:06.205 "min_cntlid": 1, 00:20:06.205 "max_cntlid": 65519, 00:20:06.205 "ana_reporting": false 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_subsystem_add_host", 00:20:06.205 "params": { 00:20:06.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.205 "host": "nqn.2016-06.io.spdk:host1", 00:20:06.205 "psk": "key0" 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_subsystem_add_ns", 00:20:06.205 "params": { 00:20:06.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.205 "namespace": { 00:20:06.205 "nsid": 1, 00:20:06.205 "bdev_name": "malloc0", 00:20:06.205 "nguid": "5BD19433AEB04AD48CBCCCC3A2D2FC17", 00:20:06.205 "uuid": "5bd19433-aeb0-4ad4-8cbc-ccc3a2d2fc17", 00:20:06.205 "no_auto_visible": false 00:20:06.205 } 00:20:06.205 } 00:20:06.205 }, 00:20:06.205 { 00:20:06.205 "method": "nvmf_subsystem_add_listener", 00:20:06.205 "params": { 00:20:06.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.205 "listen_address": { 00:20:06.205 "trtype": "TCP", 00:20:06.205 "adrfam": "IPv4", 00:20:06.205 "traddr": "10.0.0.2", 00:20:06.205 "trsvcid": "4420" 00:20:06.205 }, 00:20:06.205 "secure_channel": false, 00:20:06.205 "sock_impl": "ssl" 00:20:06.205 } 00:20:06.205 } 00:20:06.205 ] 00:20:06.205 } 00:20:06.205 ] 00:20:06.205 }' 00:20:06.205 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:06.464 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:06.464 "subsystems": [ 00:20:06.464 { 00:20:06.464 "subsystem": "keyring", 00:20:06.464 "config": [ 00:20:06.464 { 00:20:06.464 "method": "keyring_file_add_key", 00:20:06.464 "params": { 00:20:06.464 "name": "key0", 00:20:06.464 "path": "/tmp/tmp.zVkKwP74ju" 00:20:06.464 } 00:20:06.464 } 00:20:06.464 ] 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "subsystem": "iobuf", 00:20:06.464 "config": [ 00:20:06.464 { 00:20:06.464 "method": "iobuf_set_options", 00:20:06.464 "params": { 00:20:06.464 "small_pool_count": 8192, 00:20:06.464 "large_pool_count": 1024, 00:20:06.464 "small_bufsize": 8192, 00:20:06.464 "large_bufsize": 135168, 00:20:06.464 "enable_numa": false 00:20:06.464 } 00:20:06.464 } 00:20:06.464 ] 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "subsystem": "sock", 00:20:06.464 "config": [ 00:20:06.464 { 00:20:06.464 "method": "sock_set_default_impl", 00:20:06.464 "params": { 00:20:06.464 "impl_name": "posix" 00:20:06.464 } 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "method": "sock_impl_set_options", 00:20:06.464 
"params": { 00:20:06.464 "impl_name": "ssl", 00:20:06.464 "recv_buf_size": 4096, 00:20:06.464 "send_buf_size": 4096, 00:20:06.464 "enable_recv_pipe": true, 00:20:06.464 "enable_quickack": false, 00:20:06.464 "enable_placement_id": 0, 00:20:06.464 "enable_zerocopy_send_server": true, 00:20:06.464 "enable_zerocopy_send_client": false, 00:20:06.464 "zerocopy_threshold": 0, 00:20:06.464 "tls_version": 0, 00:20:06.464 "enable_ktls": false 00:20:06.464 } 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "method": "sock_impl_set_options", 00:20:06.464 "params": { 00:20:06.464 "impl_name": "posix", 00:20:06.464 "recv_buf_size": 2097152, 00:20:06.464 "send_buf_size": 2097152, 00:20:06.464 "enable_recv_pipe": true, 00:20:06.464 "enable_quickack": false, 00:20:06.464 "enable_placement_id": 0, 00:20:06.464 "enable_zerocopy_send_server": true, 00:20:06.464 "enable_zerocopy_send_client": false, 00:20:06.464 "zerocopy_threshold": 0, 00:20:06.464 "tls_version": 0, 00:20:06.464 "enable_ktls": false 00:20:06.464 } 00:20:06.464 } 00:20:06.464 ] 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "subsystem": "vmd", 00:20:06.464 "config": [] 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "subsystem": "accel", 00:20:06.464 "config": [ 00:20:06.464 { 00:20:06.464 "method": "accel_set_options", 00:20:06.464 "params": { 00:20:06.464 "small_cache_size": 128, 00:20:06.464 "large_cache_size": 16, 00:20:06.464 "task_count": 2048, 00:20:06.464 "sequence_count": 2048, 00:20:06.464 "buf_count": 2048 00:20:06.464 } 00:20:06.464 } 00:20:06.464 ] 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "subsystem": "bdev", 00:20:06.464 "config": [ 00:20:06.464 { 00:20:06.464 "method": "bdev_set_options", 00:20:06.464 "params": { 00:20:06.464 "bdev_io_pool_size": 65535, 00:20:06.464 "bdev_io_cache_size": 256, 00:20:06.464 "bdev_auto_examine": true, 00:20:06.464 "iobuf_small_cache_size": 128, 00:20:06.464 "iobuf_large_cache_size": 16 00:20:06.464 } 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "method": "bdev_raid_set_options", 00:20:06.464 "params": { 00:20:06.464 "process_window_size_kb": 1024, 00:20:06.464 "process_max_bandwidth_mb_sec": 0 00:20:06.464 } 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "method": "bdev_iscsi_set_options", 00:20:06.464 "params": { 00:20:06.464 "timeout_sec": 30 00:20:06.464 } 00:20:06.464 }, 00:20:06.464 { 00:20:06.464 "method": "bdev_nvme_set_options", 00:20:06.464 "params": { 00:20:06.464 "action_on_timeout": "none", 00:20:06.464 "timeout_us": 0, 00:20:06.464 "timeout_admin_us": 0, 00:20:06.464 "keep_alive_timeout_ms": 10000, 00:20:06.464 "arbitration_burst": 0, 00:20:06.464 "low_priority_weight": 0, 00:20:06.464 "medium_priority_weight": 0, 00:20:06.464 "high_priority_weight": 0, 00:20:06.464 "nvme_adminq_poll_period_us": 10000, 00:20:06.464 "nvme_ioq_poll_period_us": 0, 00:20:06.464 "io_queue_requests": 512, 00:20:06.464 "delay_cmd_submit": true, 00:20:06.464 "transport_retry_count": 4, 00:20:06.464 "bdev_retry_count": 3, 00:20:06.464 "transport_ack_timeout": 0, 00:20:06.465 "ctrlr_loss_timeout_sec": 0, 00:20:06.465 "reconnect_delay_sec": 0, 00:20:06.465 "fast_io_fail_timeout_sec": 0, 00:20:06.465 "disable_auto_failback": false, 00:20:06.465 "generate_uuids": false, 00:20:06.465 "transport_tos": 0, 00:20:06.465 "nvme_error_stat": false, 00:20:06.465 "rdma_srq_size": 0, 00:20:06.465 "io_path_stat": false, 00:20:06.465 "allow_accel_sequence": false, 00:20:06.465 "rdma_max_cq_size": 0, 00:20:06.465 "rdma_cm_event_timeout_ms": 0, 00:20:06.465 "dhchap_digests": [ 00:20:06.465 "sha256", 00:20:06.465 "sha384", 00:20:06.465 
"sha512" 00:20:06.465 ], 00:20:06.465 "dhchap_dhgroups": [ 00:20:06.465 "null", 00:20:06.465 "ffdhe2048", 00:20:06.465 "ffdhe3072", 00:20:06.465 "ffdhe4096", 00:20:06.465 "ffdhe6144", 00:20:06.465 "ffdhe8192" 00:20:06.465 ] 00:20:06.465 } 00:20:06.465 }, 00:20:06.465 { 00:20:06.465 "method": "bdev_nvme_attach_controller", 00:20:06.465 "params": { 00:20:06.465 "name": "nvme0", 00:20:06.465 "trtype": "TCP", 00:20:06.465 "adrfam": "IPv4", 00:20:06.465 "traddr": "10.0.0.2", 00:20:06.465 "trsvcid": "4420", 00:20:06.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.465 "prchk_reftag": false, 00:20:06.465 "prchk_guard": false, 00:20:06.465 "ctrlr_loss_timeout_sec": 0, 00:20:06.465 "reconnect_delay_sec": 0, 00:20:06.465 "fast_io_fail_timeout_sec": 0, 00:20:06.465 "psk": "key0", 00:20:06.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.465 "hdgst": false, 00:20:06.465 "ddgst": false, 00:20:06.465 "multipath": "multipath" 00:20:06.465 } 00:20:06.465 }, 00:20:06.465 { 00:20:06.465 "method": "bdev_nvme_set_hotplug", 00:20:06.465 "params": { 00:20:06.465 "period_us": 100000, 00:20:06.465 "enable": false 00:20:06.465 } 00:20:06.465 }, 00:20:06.465 { 00:20:06.465 "method": "bdev_enable_histogram", 00:20:06.465 "params": { 00:20:06.465 "name": "nvme0n1", 00:20:06.465 "enable": true 00:20:06.465 } 00:20:06.465 }, 00:20:06.465 { 00:20:06.465 "method": "bdev_wait_for_examine" 00:20:06.465 } 00:20:06.465 ] 00:20:06.465 }, 00:20:06.465 { 00:20:06.465 "subsystem": "nbd", 00:20:06.465 "config": [] 00:20:06.465 } 00:20:06.465 ] 00:20:06.465 }' 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2602938 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2602938 ']' 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2602938 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602938 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602938' 00:20:06.465 killing process with pid 2602938 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2602938 00:20:06.465 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.465 00:20:06.465 Latency(us) 00:20:06.465 [2024-12-09T16:30:35.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.465 [2024-12-09T16:30:35.644Z] =================================================================================================================== 00:20:06.465 [2024-12-09T16:30:35.644Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2602938 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2602723 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2602723 
']' 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2602723 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.465 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602723 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602723' 00:20:06.724 killing process with pid 2602723 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2602723 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2602723 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.724 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:06.724 "subsystems": [ 00:20:06.724 { 00:20:06.724 "subsystem": "keyring", 00:20:06.724 "config": [ 00:20:06.724 { 00:20:06.724 "method": "keyring_file_add_key", 00:20:06.724 "params": { 00:20:06.724 "name": "key0", 00:20:06.724 "path": "/tmp/tmp.zVkKwP74ju" 00:20:06.724 } 00:20:06.724 } 00:20:06.724 ] 00:20:06.724 }, 00:20:06.724 { 00:20:06.724 "subsystem": "iobuf", 00:20:06.724 "config": [ 00:20:06.724 { 00:20:06.724 "method": "iobuf_set_options", 00:20:06.724 "params": { 00:20:06.724 "small_pool_count": 8192, 00:20:06.724 "large_pool_count": 1024, 00:20:06.724 "small_bufsize": 8192, 00:20:06.724 "large_bufsize": 135168, 00:20:06.724 "enable_numa": false 00:20:06.724 } 00:20:06.724 } 00:20:06.724 ] 00:20:06.724 }, 00:20:06.724 { 00:20:06.724 "subsystem": "sock", 00:20:06.724 "config": [ 00:20:06.724 { 00:20:06.724 "method": "sock_set_default_impl", 00:20:06.724 "params": { 00:20:06.724 "impl_name": "posix" 00:20:06.724 } 00:20:06.724 }, 00:20:06.724 { 00:20:06.724 "method": "sock_impl_set_options", 00:20:06.724 "params": { 00:20:06.724 "impl_name": "ssl", 00:20:06.724 "recv_buf_size": 4096, 00:20:06.724 "send_buf_size": 4096, 00:20:06.724 "enable_recv_pipe": true, 00:20:06.724 "enable_quickack": false, 00:20:06.724 "enable_placement_id": 0, 00:20:06.724 "enable_zerocopy_send_server": true, 00:20:06.724 "enable_zerocopy_send_client": false, 00:20:06.724 "zerocopy_threshold": 0, 00:20:06.724 "tls_version": 0, 00:20:06.724 "enable_ktls": false 00:20:06.724 } 00:20:06.724 }, 00:20:06.724 { 00:20:06.724 "method": "sock_impl_set_options", 00:20:06.724 "params": { 00:20:06.724 "impl_name": "posix", 00:20:06.724 "recv_buf_size": 2097152, 00:20:06.724 "send_buf_size": 2097152, 00:20:06.724 "enable_recv_pipe": true, 00:20:06.724 "enable_quickack": false, 00:20:06.724 "enable_placement_id": 0, 00:20:06.724 "enable_zerocopy_send_server": true, 00:20:06.724 "enable_zerocopy_send_client": false, 00:20:06.724 "zerocopy_threshold": 0, 00:20:06.724 "tls_version": 0, 00:20:06.724 "enable_ktls": 
false 00:20:06.724 } 00:20:06.724 } 00:20:06.724 ] 00:20:06.724 }, 00:20:06.724 { 00:20:06.724 "subsystem": "vmd", 00:20:06.724 "config": [] 00:20:06.724 }, 00:20:06.724 { 00:20:06.724 "subsystem": "accel", 00:20:06.724 "config": [ 00:20:06.724 { 00:20:06.725 "method": "accel_set_options", 00:20:06.725 "params": { 00:20:06.725 "small_cache_size": 128, 00:20:06.725 "large_cache_size": 16, 00:20:06.725 "task_count": 2048, 00:20:06.725 "sequence_count": 2048, 00:20:06.725 "buf_count": 2048 00:20:06.725 } 00:20:06.725 } 00:20:06.725 ] 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "subsystem": "bdev", 00:20:06.725 "config": [ 00:20:06.725 { 00:20:06.725 "method": "bdev_set_options", 00:20:06.725 "params": { 00:20:06.725 "bdev_io_pool_size": 65535, 00:20:06.725 "bdev_io_cache_size": 256, 00:20:06.725 "bdev_auto_examine": true, 00:20:06.725 "iobuf_small_cache_size": 128, 00:20:06.725 "iobuf_large_cache_size": 16 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "bdev_raid_set_options", 00:20:06.725 "params": { 00:20:06.725 "process_window_size_kb": 1024, 00:20:06.725 "process_max_bandwidth_mb_sec": 0 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "bdev_iscsi_set_options", 00:20:06.725 "params": { 00:20:06.725 "timeout_sec": 30 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "bdev_nvme_set_options", 00:20:06.725 "params": { 00:20:06.725 "action_on_timeout": "none", 00:20:06.725 "timeout_us": 0, 00:20:06.725 "timeout_admin_us": 0, 00:20:06.725 "keep_alive_timeout_ms": 10000, 00:20:06.725 "arbitration_burst": 0, 00:20:06.725 "low_priority_weight": 0, 00:20:06.725 "medium_priority_weight": 0, 00:20:06.725 "high_priority_weight": 0, 00:20:06.725 "nvme_adminq_poll_period_us": 10000, 00:20:06.725 "nvme_ioq_poll_period_us": 0, 00:20:06.725 "io_queue_requests": 0, 00:20:06.725 "delay_cmd_submit": true, 00:20:06.725 "transport_retry_count": 4, 00:20:06.725 "bdev_retry_count": 3, 00:20:06.725 "transport_ack_timeout": 0, 00:20:06.725 "ctrlr_loss_timeout_sec": 0, 00:20:06.725 "reconnect_delay_sec": 0, 00:20:06.725 "fast_io_fail_timeout_sec": 0, 00:20:06.725 "disable_auto_failback": false, 00:20:06.725 "generate_uuids": false, 00:20:06.725 "transport_tos": 0, 00:20:06.725 "nvme_error_stat": false, 00:20:06.725 "rdma_srq_size": 0, 00:20:06.725 "io_path_stat": false, 00:20:06.725 "allow_accel_sequence": false, 00:20:06.725 "rdma_max_cq_size": 0, 00:20:06.725 "rdma_cm_event_timeout_ms": 0, 00:20:06.725 "dhchap_digests": [ 00:20:06.725 "sha256", 00:20:06.725 "sha384", 00:20:06.725 "sha512" 00:20:06.725 ], 00:20:06.725 "dhchap_dhgroups": [ 00:20:06.725 "null", 00:20:06.725 "ffdhe2048", 00:20:06.725 "ffdhe3072", 00:20:06.725 "ffdhe4096", 00:20:06.725 "ffdhe6144", 00:20:06.725 "ffdhe8192" 00:20:06.725 ] 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "bdev_nvme_set_hotplug", 00:20:06.725 "params": { 00:20:06.725 "period_us": 100000, 00:20:06.725 "enable": false 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "bdev_malloc_create", 00:20:06.725 "params": { 00:20:06.725 "name": "malloc0", 00:20:06.725 "num_blocks": 8192, 00:20:06.725 "block_size": 4096, 00:20:06.725 "physical_block_size": 4096, 00:20:06.725 "uuid": "5bd19433-aeb0-4ad4-8cbc-ccc3a2d2fc17", 00:20:06.725 "optimal_io_boundary": 0, 00:20:06.725 "md_size": 0, 00:20:06.725 "dif_type": 0, 00:20:06.725 "dif_is_head_of_md": false, 00:20:06.725 "dif_pi_format": 0 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "bdev_wait_for_examine" 
00:20:06.725 } 00:20:06.725 ] 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "subsystem": "nbd", 00:20:06.725 "config": [] 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "subsystem": "scheduler", 00:20:06.725 "config": [ 00:20:06.725 { 00:20:06.725 "method": "framework_set_scheduler", 00:20:06.725 "params": { 00:20:06.725 "name": "static" 00:20:06.725 } 00:20:06.725 } 00:20:06.725 ] 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "subsystem": "nvmf", 00:20:06.725 "config": [ 00:20:06.725 { 00:20:06.725 "method": "nvmf_set_config", 00:20:06.725 "params": { 00:20:06.725 "discovery_filter": "match_any", 00:20:06.725 "admin_cmd_passthru": { 00:20:06.725 "identify_ctrlr": false 00:20:06.725 }, 00:20:06.725 "dhchap_digests": [ 00:20:06.725 "sha256", 00:20:06.725 "sha384", 00:20:06.725 "sha512" 00:20:06.725 ], 00:20:06.725 "dhchap_dhgroups": [ 00:20:06.725 "null", 00:20:06.725 "ffdhe2048", 00:20:06.725 "ffdhe3072", 00:20:06.725 "ffdhe4096", 00:20:06.725 "ffdhe6144", 00:20:06.725 "ffdhe8192" 00:20:06.725 ] 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_set_max_subsystems", 00:20:06.725 "params": { 00:20:06.725 "max_subsystems": 1024 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_set_crdt", 00:20:06.725 "params": { 00:20:06.725 "crdt1": 0, 00:20:06.725 "crdt2": 0, 00:20:06.725 "crdt3": 0 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_create_transport", 00:20:06.725 "params": { 00:20:06.725 "trtype": "TCP", 00:20:06.725 "max_queue_depth": 128, 00:20:06.725 "max_io_qpairs_per_ctrlr": 127, 00:20:06.725 "in_capsule_data_size": 4096, 00:20:06.725 "max_io_size": 131072, 00:20:06.725 "io_unit_size": 131072, 00:20:06.725 "max_aq_depth": 128, 00:20:06.725 "num_shared_buffers": 511, 00:20:06.725 "buf_cache_size": 4294967295, 00:20:06.725 "dif_insert_or_strip": false, 00:20:06.725 "zcopy": false, 00:20:06.725 "c2h_success": false, 00:20:06.725 "sock_priority": 0, 00:20:06.725 "abort_timeout_sec": 1, 00:20:06.725 "ack_timeout": 0, 00:20:06.725 "data_wr_pool_size": 0 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_create_subsystem", 00:20:06.725 "params": { 00:20:06.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.725 "allow_any_host": false, 00:20:06.725 "serial_number": "00000000000000000000", 00:20:06.725 "model_number": "SPDK bdev Controller", 00:20:06.725 "max_namespaces": 32, 00:20:06.725 "min_cntlid": 1, 00:20:06.725 "max_cntlid": 65519, 00:20:06.725 "ana_reporting": false 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_subsystem_add_host", 00:20:06.725 "params": { 00:20:06.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.725 "host": "nqn.2016-06.io.spdk:host1", 00:20:06.725 "psk": "key0" 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_subsystem_add_ns", 00:20:06.725 "params": { 00:20:06.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.725 "namespace": { 00:20:06.725 "nsid": 1, 00:20:06.725 "bdev_name": "malloc0", 00:20:06.725 "nguid": "5BD19433AEB04AD48CBCCCC3A2D2FC17", 00:20:06.725 "uuid": "5bd19433-aeb0-4ad4-8cbc-ccc3a2d2fc17", 00:20:06.725 "no_auto_visible": false 00:20:06.725 } 00:20:06.725 } 00:20:06.725 }, 00:20:06.725 { 00:20:06.725 "method": "nvmf_subsystem_add_listener", 00:20:06.725 "params": { 00:20:06.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.725 "listen_address": { 00:20:06.725 "trtype": "TCP", 00:20:06.725 "adrfam": "IPv4", 00:20:06.725 "traddr": "10.0.0.2", 00:20:06.725 "trsvcid": "4420" 00:20:06.725 }, 00:20:06.725 
"secure_channel": false, 00:20:06.725 "sock_impl": "ssl" 00:20:06.725 } 00:20:06.725 } 00:20:06.725 ] 00:20:06.725 } 00:20:06.725 ] 00:20:06.725 }' 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2603321 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2603321 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2603321 ']' 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.725 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.725 [2024-12-09 17:30:35.868200] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:20:06.725 [2024-12-09 17:30:35.868255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.984 [2024-12-09 17:30:35.945425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.984 [2024-12-09 17:30:35.982089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.984 [2024-12-09 17:30:35.982126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.984 [2024-12-09 17:30:35.982134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.984 [2024-12-09 17:30:35.982140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.984 [2024-12-09 17:30:35.982144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.984 [2024-12-09 17:30:35.982735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.242 [2024-12-09 17:30:36.194237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.242 [2024-12-09 17:30:36.226271] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.242 [2024-12-09 17:30:36.226462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2603441 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2603441 /var/tmp/bdevperf.sock 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2603441 ']' 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
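
Both daemons in this phase read their JSON config over an anonymous descriptor instead of a file: the -c /dev/fd/62 and -c /dev/fd/63 arguments above come from bash process substitution around the echoed configs. Shape of the invocation, condensed ($tgtcfg and $bperfcfg hold the dumps printed in this log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$tgtcfg") &
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

Of everything in those dumps, only a few entries are TLS-specific: the keyring_file_add_key path, the "psk": "key0" reference (nvmf_subsystem_add_host on the target side, bdev_nvme_attach_controller on the initiator side), and the listener's "sock_impl": "ssl" with "secure_channel": false.
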
00:20:07.808 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:07.808 "subsystems": [ 00:20:07.808 { 00:20:07.808 "subsystem": "keyring", 00:20:07.808 "config": [ 00:20:07.808 { 00:20:07.808 "method": "keyring_file_add_key", 00:20:07.808 "params": { 00:20:07.808 "name": "key0", 00:20:07.808 "path": "/tmp/tmp.zVkKwP74ju" 00:20:07.808 } 00:20:07.808 } 00:20:07.808 ] 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "subsystem": "iobuf", 00:20:07.808 "config": [ 00:20:07.808 { 00:20:07.808 "method": "iobuf_set_options", 00:20:07.808 "params": { 00:20:07.808 "small_pool_count": 8192, 00:20:07.808 "large_pool_count": 1024, 00:20:07.808 "small_bufsize": 8192, 00:20:07.808 "large_bufsize": 135168, 00:20:07.808 "enable_numa": false 00:20:07.808 } 00:20:07.808 } 00:20:07.808 ] 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "subsystem": "sock", 00:20:07.808 "config": [ 00:20:07.808 { 00:20:07.808 "method": "sock_set_default_impl", 00:20:07.808 "params": { 00:20:07.808 "impl_name": "posix" 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "sock_impl_set_options", 00:20:07.808 "params": { 00:20:07.808 "impl_name": "ssl", 00:20:07.808 "recv_buf_size": 4096, 00:20:07.808 "send_buf_size": 4096, 00:20:07.808 "enable_recv_pipe": true, 00:20:07.808 "enable_quickack": false, 00:20:07.808 "enable_placement_id": 0, 00:20:07.808 "enable_zerocopy_send_server": true, 00:20:07.808 "enable_zerocopy_send_client": false, 00:20:07.808 "zerocopy_threshold": 0, 00:20:07.808 "tls_version": 0, 00:20:07.808 "enable_ktls": false 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "sock_impl_set_options", 00:20:07.808 "params": { 00:20:07.808 "impl_name": "posix", 00:20:07.808 "recv_buf_size": 2097152, 00:20:07.808 "send_buf_size": 2097152, 00:20:07.808 "enable_recv_pipe": true, 00:20:07.808 "enable_quickack": false, 00:20:07.808 "enable_placement_id": 0, 00:20:07.808 "enable_zerocopy_send_server": true, 00:20:07.808 "enable_zerocopy_send_client": false, 00:20:07.808 "zerocopy_threshold": 0, 00:20:07.808 "tls_version": 0, 00:20:07.808 "enable_ktls": false 00:20:07.808 } 00:20:07.808 } 00:20:07.808 ] 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "subsystem": "vmd", 00:20:07.808 "config": [] 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "subsystem": "accel", 00:20:07.808 "config": [ 00:20:07.808 { 00:20:07.808 "method": "accel_set_options", 00:20:07.808 "params": { 00:20:07.808 "small_cache_size": 128, 00:20:07.808 "large_cache_size": 16, 00:20:07.808 "task_count": 2048, 00:20:07.808 "sequence_count": 2048, 00:20:07.808 "buf_count": 2048 00:20:07.808 } 00:20:07.808 } 00:20:07.808 ] 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "subsystem": "bdev", 00:20:07.808 "config": [ 00:20:07.808 { 00:20:07.808 "method": "bdev_set_options", 00:20:07.808 "params": { 00:20:07.808 "bdev_io_pool_size": 65535, 00:20:07.808 "bdev_io_cache_size": 256, 00:20:07.808 "bdev_auto_examine": true, 00:20:07.808 "iobuf_small_cache_size": 128, 00:20:07.808 "iobuf_large_cache_size": 16 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "bdev_raid_set_options", 00:20:07.808 "params": { 00:20:07.808 "process_window_size_kb": 1024, 00:20:07.808 "process_max_bandwidth_mb_sec": 0 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "bdev_iscsi_set_options", 00:20:07.808 "params": { 00:20:07.808 "timeout_sec": 30 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "bdev_nvme_set_options", 00:20:07.808 "params": { 00:20:07.808 "action_on_timeout": "none", 
00:20:07.808 "timeout_us": 0, 00:20:07.808 "timeout_admin_us": 0, 00:20:07.808 "keep_alive_timeout_ms": 10000, 00:20:07.808 "arbitration_burst": 0, 00:20:07.808 "low_priority_weight": 0, 00:20:07.808 "medium_priority_weight": 0, 00:20:07.808 "high_priority_weight": 0, 00:20:07.808 "nvme_adminq_poll_period_us": 10000, 00:20:07.808 "nvme_ioq_poll_period_us": 0, 00:20:07.808 "io_queue_requests": 512, 00:20:07.808 "delay_cmd_submit": true, 00:20:07.808 "transport_retry_count": 4, 00:20:07.808 "bdev_retry_count": 3, 00:20:07.808 "transport_ack_timeout": 0, 00:20:07.808 "ctrlr_loss_timeout_sec": 0, 00:20:07.808 "reconnect_delay_sec": 0, 00:20:07.808 "fast_io_fail_timeout_sec": 0, 00:20:07.808 "disable_auto_failback": false, 00:20:07.808 "generate_uuids": false, 00:20:07.808 "transport_tos": 0, 00:20:07.808 "nvme_error_stat": false, 00:20:07.808 "rdma_srq_size": 0, 00:20:07.808 "io_path_stat": false, 00:20:07.808 "allow_accel_sequence": false, 00:20:07.808 "rdma_max_cq_size": 0, 00:20:07.808 "rdma_cm_event_timeout_ms": 0, 00:20:07.808 "dhchap_digests": [ 00:20:07.808 "sha256", 00:20:07.808 "sha384", 00:20:07.808 "sha512" 00:20:07.808 ], 00:20:07.808 "dhchap_dhgroups": [ 00:20:07.808 "null", 00:20:07.808 "ffdhe2048", 00:20:07.808 "ffdhe3072", 00:20:07.808 "ffdhe4096", 00:20:07.808 "ffdhe6144", 00:20:07.808 "ffdhe8192" 00:20:07.808 ] 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "bdev_nvme_attach_controller", 00:20:07.808 "params": { 00:20:07.808 "name": "nvme0", 00:20:07.808 "trtype": "TCP", 00:20:07.808 "adrfam": "IPv4", 00:20:07.808 "traddr": "10.0.0.2", 00:20:07.808 "trsvcid": "4420", 00:20:07.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.808 "prchk_reftag": false, 00:20:07.808 "prchk_guard": false, 00:20:07.808 "ctrlr_loss_timeout_sec": 0, 00:20:07.808 "reconnect_delay_sec": 0, 00:20:07.808 "fast_io_fail_timeout_sec": 0, 00:20:07.808 "psk": "key0", 00:20:07.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.808 "hdgst": false, 00:20:07.808 "ddgst": false, 00:20:07.808 "multipath": "multipath" 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "bdev_nvme_set_hotplug", 00:20:07.808 "params": { 00:20:07.808 "period_us": 100000, 00:20:07.808 "enable": false 00:20:07.808 } 00:20:07.808 }, 00:20:07.808 { 00:20:07.808 "method": "bdev_enable_histogram", 00:20:07.808 "params": { 00:20:07.809 "name": "nvme0n1", 00:20:07.809 "enable": true 00:20:07.809 } 00:20:07.809 }, 00:20:07.809 { 00:20:07.809 "method": "bdev_wait_for_examine" 00:20:07.809 } 00:20:07.809 ] 00:20:07.809 }, 00:20:07.809 { 00:20:07.809 "subsystem": "nbd", 00:20:07.809 "config": [] 00:20:07.809 } 00:20:07.809 ] 00:20:07.809 }' 00:20:07.809 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.809 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.809 [2024-12-09 17:30:36.793518] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
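
Because this bdevperf instance was configured purely from the replayed JSON, tls.sh@279 below first confirms the attach actually produced a controller before starting I/O; as a standalone check (jq assumed available) that is roughly:

    name=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
        | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1   # only then is perform_tests invoked
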
00:20:07.809 [2024-12-09 17:30:36.793567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603441 ] 00:20:07.809 [2024-12-09 17:30:36.868041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.809 [2024-12-09 17:30:36.907112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.067 [2024-12-09 17:30:37.061334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.632 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.632 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.632 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:08.632 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:08.890 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.890 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.890 Running I/O for 1 seconds... 00:20:09.825 5514.00 IOPS, 21.54 MiB/s 00:20:09.825 Latency(us) 00:20:09.825 [2024-12-09T16:30:39.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.825 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.825 Verification LBA range: start 0x0 length 0x2000 00:20:09.825 nvme0n1 : 1.01 5560.97 21.72 0.00 0.00 22852.56 5960.66 29584.82 00:20:09.825 [2024-12-09T16:30:39.004Z] =================================================================================================================== 00:20:09.825 [2024-12-09T16:30:39.004Z] Total : 5560.97 21.72 0.00 0.00 22852.56 5960.66 29584.82 00:20:09.825 { 00:20:09.825 "results": [ 00:20:09.825 { 00:20:09.825 "job": "nvme0n1", 00:20:09.825 "core_mask": "0x2", 00:20:09.825 "workload": "verify", 00:20:09.825 "status": "finished", 00:20:09.825 "verify_range": { 00:20:09.825 "start": 0, 00:20:09.825 "length": 8192 00:20:09.825 }, 00:20:09.825 "queue_depth": 128, 00:20:09.825 "io_size": 4096, 00:20:09.825 "runtime": 1.014571, 00:20:09.825 "iops": 5560.971090244054, 00:20:09.825 "mibps": 21.722543321265835, 00:20:09.825 "io_failed": 0, 00:20:09.825 "io_timeout": 0, 00:20:09.826 "avg_latency_us": 22852.558961530023, 00:20:09.826 "min_latency_us": 5960.655238095238, 00:20:09.826 "max_latency_us": 29584.822857142855 00:20:09.826 } 00:20:09.826 ], 00:20:09.826 "core_count": 1 00:20:09.826 } 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:09.826 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:09.826 nvmf_trace.0 00:20:10.084 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:10.084 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2603441 00:20:10.084 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2603441 ']' 00:20:10.084 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2603441 00:20:10.084 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2603441 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2603441' 00:20:10.085 killing process with pid 2603441 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2603441 00:20:10.085 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.085 00:20:10.085 Latency(us) 00:20:10.085 [2024-12-09T16:30:39.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.085 [2024-12-09T16:30:39.264Z] =================================================================================================================== 00:20:10.085 [2024-12-09T16:30:39.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2603441 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.085 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.344 rmmod nvme_tcp 00:20:10.344 rmmod nvme_fabrics 00:20:10.344 rmmod nvme_keyring 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.344 17:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2603321 ']' 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2603321 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2603321 ']' 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2603321 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2603321 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2603321' 00:20:10.344 killing process with pid 2603321 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2603321 00:20:10.344 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2603321 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.603 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3BG4kTd7WS /tmp/tmp.aP1mxlhMMi /tmp/tmp.zVkKwP74ju 00:20:12.506 00:20:12.506 real 1m19.329s 00:20:12.506 user 2m1.722s 00:20:12.506 sys 0m30.108s 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.506 ************************************ 00:20:12.506 END TEST nvmf_tls 
00:20:12.506 ************************************ 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.506 17:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.766 ************************************ 00:20:12.766 START TEST nvmf_fips 00:20:12.766 ************************************ 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:12.766 * Looking for test storage... 00:20:12.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:12.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.766 --rc genhtml_branch_coverage=1 00:20:12.766 --rc genhtml_function_coverage=1 00:20:12.766 --rc genhtml_legend=1 00:20:12.766 --rc geninfo_all_blocks=1 00:20:12.766 --rc geninfo_unexecuted_blocks=1 00:20:12.766 00:20:12.766 ' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:12.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.766 --rc genhtml_branch_coverage=1 00:20:12.766 --rc genhtml_function_coverage=1 00:20:12.766 --rc genhtml_legend=1 00:20:12.766 --rc geninfo_all_blocks=1 00:20:12.766 --rc geninfo_unexecuted_blocks=1 00:20:12.766 00:20:12.766 ' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:12.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.766 --rc genhtml_branch_coverage=1 00:20:12.766 --rc genhtml_function_coverage=1 00:20:12.766 --rc genhtml_legend=1 00:20:12.766 --rc geninfo_all_blocks=1 00:20:12.766 --rc geninfo_unexecuted_blocks=1 00:20:12.766 00:20:12.766 ' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:12.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.766 --rc genhtml_branch_coverage=1 00:20:12.766 --rc genhtml_function_coverage=1 00:20:12.766 --rc genhtml_legend=1 00:20:12.766 --rc geninfo_all_blocks=1 00:20:12.766 --rc geninfo_unexecuted_blocks=1 00:20:12.766 00:20:12.766 ' 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.766 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:12.767 17:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.767 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:13.066 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:13.066 Error setting digest 00:20:13.066 40B2B04DA07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:13.066 40B2B04DA07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.066 
17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.066 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.711 17:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:19.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.711 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:19.712 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.712 17:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:19.712 Found net devices under 0000:af:00.0: cvl_0_0 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:19.712 Found net devices under 0000:af:00.1: cvl_0_1 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.712 17:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:20:19.712 00:20:19.712 --- 10.0.0.2 ping statistics --- 00:20:19.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.712 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:20:19.712 00:20:19.712 --- 10.0.0.1 ping statistics --- 00:20:19.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.712 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2607444 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2607444 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2607444 ']' 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.712 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.712 [2024-12-09 17:30:48.032971] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:20:19.712 [2024-12-09 17:30:48.033022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.712 [2024-12-09 17:30:48.110256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.712 [2024-12-09 17:30:48.147264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.712 [2024-12-09 17:30:48.147297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.712 [2024-12-09 17:30:48.147304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.712 [2024-12-09 17:30:48.147309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.712 [2024-12-09 17:30:48.147314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.712 [2024-12-09 17:30:48.147858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.712 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.712 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:19.712 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.712 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.712 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.712 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.RiC 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.RiC 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.RiC 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.RiC 00:20:19.972 17:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.972 [2024-12-09 17:30:49.069150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.972 [2024-12-09 17:30:49.085153] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.972 [2024-12-09 17:30:49.085384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.972 malloc0 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.231 17:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2607690 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2607690 /var/tmp/bdevperf.sock 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2607690 ']' 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.231 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:20.231 [2024-12-09 17:30:49.214413] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:20:20.231 [2024-12-09 17:30:49.214466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607690 ] 00:20:20.231 [2024-12-09 17:30:49.288772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.231 [2024-12-09 17:30:49.327838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.167 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.167 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:21.168 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.RiC 00:20:21.168 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:21.426 [2024-12-09 17:30:50.425030] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.426 TLSTESTn1 00:20:21.426 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.685 Running I/O for 10 seconds... 
00:20:23.556 5540.00 IOPS, 21.64 MiB/s [2024-12-09T16:30:53.672Z] 5552.00 IOPS, 21.69 MiB/s [2024-12-09T16:30:55.050Z] 5568.00 IOPS, 21.75 MiB/s [2024-12-09T16:30:55.987Z] 5572.50 IOPS, 21.77 MiB/s [2024-12-09T16:30:56.923Z] 5599.40 IOPS, 21.87 MiB/s [2024-12-09T16:30:57.860Z] 5617.33 IOPS, 21.94 MiB/s [2024-12-09T16:30:58.798Z] 5614.00 IOPS, 21.93 MiB/s [2024-12-09T16:30:59.734Z] 5619.00 IOPS, 21.95 MiB/s [2024-12-09T16:31:00.670Z] 5600.67 IOPS, 21.88 MiB/s [2024-12-09T16:31:00.670Z] 5539.70 IOPS, 21.64 MiB/s 00:20:31.491 Latency(us) 00:20:31.491 [2024-12-09T16:31:00.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.491 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:31.491 Verification LBA range: start 0x0 length 0x2000 00:20:31.491 TLSTESTn1 : 10.02 5541.75 21.65 0.00 0.00 23061.82 5242.88 28960.67 00:20:31.491 [2024-12-09T16:31:00.670Z] =================================================================================================================== 00:20:31.491 [2024-12-09T16:31:00.670Z] Total : 5541.75 21.65 0.00 0.00 23061.82 5242.88 28960.67 00:20:31.491 { 00:20:31.491 "results": [ 00:20:31.491 { 00:20:31.491 "job": "TLSTESTn1", 00:20:31.491 "core_mask": "0x4", 00:20:31.491 "workload": "verify", 00:20:31.491 "status": "finished", 00:20:31.491 "verify_range": { 00:20:31.491 "start": 0, 00:20:31.491 "length": 8192 00:20:31.491 }, 00:20:31.491 "queue_depth": 128, 00:20:31.491 "io_size": 4096, 00:20:31.491 "runtime": 10.019221, 00:20:31.491 "iops": 5541.7482057736825, 00:20:31.491 "mibps": 21.647453928803447, 00:20:31.491 "io_failed": 0, 00:20:31.491 "io_timeout": 0, 00:20:31.491 "avg_latency_us": 23061.81783472441, 00:20:31.491 "min_latency_us": 5242.88, 00:20:31.491 "max_latency_us": 28960.670476190477 00:20:31.491 } 00:20:31.491 ], 00:20:31.491 "core_count": 1 00:20:31.491 } 00:20:31.750 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:31.751 nvmf_trace.0 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2607690 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2607690 ']' 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2607690 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2607690 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2607690' 00:20:31.751 killing process with pid 2607690 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2607690 00:20:31.751 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.751 00:20:31.751 Latency(us) 00:20:31.751 [2024-12-09T16:31:00.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.751 [2024-12-09T16:31:00.930Z] =================================================================================================================== 00:20:31.751 [2024-12-09T16:31:00.930Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:31.751 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2607690 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.009 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.009 rmmod nvme_tcp 00:20:32.009 rmmod nvme_fabrics 00:20:32.009 rmmod nvme_keyring 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2607444 ']' 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2607444 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2607444 ']' 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2607444 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2607444 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.010 17:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2607444' 00:20:32.010 killing process with pid 2607444 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2607444 00:20:32.010 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2607444 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.269 17:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.173 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:34.173 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.RiC 00:20:34.173 00:20:34.173 real 0m21.634s 00:20:34.173 user 0m23.485s 00:20:34.173 sys 0m9.627s 00:20:34.173 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.173 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.173 ************************************ 00:20:34.173 END TEST nvmf_fips 00:20:34.173 ************************************ 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:34.432 ************************************ 00:20:34.432 START TEST nvmf_control_msg_list 00:20:34.432 ************************************ 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:34.432 * Looking for test storage... 
00:20:34.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.432 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:34.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.433 --rc genhtml_branch_coverage=1 00:20:34.433 --rc genhtml_function_coverage=1 00:20:34.433 --rc genhtml_legend=1 00:20:34.433 --rc geninfo_all_blocks=1 00:20:34.433 --rc geninfo_unexecuted_blocks=1 00:20:34.433 00:20:34.433 ' 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:34.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.433 --rc genhtml_branch_coverage=1 00:20:34.433 --rc genhtml_function_coverage=1 00:20:34.433 --rc genhtml_legend=1 00:20:34.433 --rc geninfo_all_blocks=1 00:20:34.433 --rc geninfo_unexecuted_blocks=1 00:20:34.433 00:20:34.433 ' 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:34.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.433 --rc genhtml_branch_coverage=1 00:20:34.433 --rc genhtml_function_coverage=1 00:20:34.433 --rc genhtml_legend=1 00:20:34.433 --rc geninfo_all_blocks=1 00:20:34.433 --rc geninfo_unexecuted_blocks=1 00:20:34.433 00:20:34.433 ' 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:34.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.433 --rc genhtml_branch_coverage=1 00:20:34.433 --rc genhtml_function_coverage=1 00:20:34.433 --rc genhtml_legend=1 00:20:34.433 --rc geninfo_all_blocks=1 00:20:34.433 --rc geninfo_unexecuted_blocks=1 00:20:34.433 00:20:34.433 ' 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.433 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:34.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:34.693 17:31:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:41.264 17:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:41.264 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.264 17:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:41.264 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:41.264 Found net devices under 0000:af:00.0: cvl_0_0 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:41.264 Found net devices under 0000:af:00.1: cvl_0_1 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.264 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.265 17:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:41.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:20:41.265 00:20:41.265 --- 10.0.0.2 ping statistics --- 00:20:41.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.265 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:41.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:20:41.265 00:20:41.265 --- 10.0.0.1 ping statistics --- 00:20:41.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.265 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2613007 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2613007 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2613007 ']' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 [2024-12-09 17:31:09.605069] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:20:41.265 [2024-12-09 17:31:09.605110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.265 [2024-12-09 17:31:09.684067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.265 [2024-12-09 17:31:09.725297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.265 [2024-12-09 17:31:09.725326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.265 [2024-12-09 17:31:09.725334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.265 [2024-12-09 17:31:09.725340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.265 [2024-12-09 17:31:09.725346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
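At this point the target is up: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with all trace groups enabled (-e 0xFFFF) and waitforlisten blocks until the RPC socket answers. In outline, and only as a sketch, since the poll loop below approximates waitforlisten, whose exact body is not shown in this log:
# Sketch of nvmfappstart as traced above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!                                           # logged above as nvmfpid=2613007
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # wait for the RPC socket
# The earlier "line 33: [: : integer expression expected" message came from
# '[' '' -eq 1 ']': an empty flag variable fed to a numeric test. It is benign
# (the branch is simply not taken); a defaulted expansion avoids the noise.
# FLAG is a placeholder; the real variable name is not visible in this trace:
[ "${FLAG:-0}" -eq 1 ] && echo "flag enabled"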
00:20:41.265 [2024-12-09 17:31:09.725893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 [2024-12-09 17:31:09.862083] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 Malloc0 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.265 17:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:41.265 [2024-12-09 17:31:09.902198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2613136 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2613138 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2613139 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.265 17:31:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2613136 00:20:41.265 [2024-12-09 17:31:09.990896] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:41.265 [2024-12-09 17:31:09.991075] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:41.265 [2024-12-09 17:31:09.991233] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:42.200 Initializing NVMe Controllers 00:20:42.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:42.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:42.200 Initialization complete. Launching workers. 
00:20:42.200 ======================================================== 00:20:42.200 Latency(us) 00:20:42.200 Device Information : IOPS MiB/s Average min max 00:20:42.200 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6172.00 24.11 161.67 122.85 40531.91 00:20:42.200 ======================================================== 00:20:42.200 Total : 6172.00 24.11 161.67 122.85 40531.91 00:20:42.200 00:20:42.200 Initializing NVMe Controllers 00:20:42.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:42.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:42.200 Initialization complete. Launching workers. 00:20:42.200 ======================================================== 00:20:42.200 Latency(us) 00:20:42.200 Device Information : IOPS MiB/s Average min max 00:20:42.200 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40932.50 40819.56 41715.35 00:20:42.200 ======================================================== 00:20:42.200 Total : 25.00 0.10 40932.50 40819.56 41715.35 00:20:42.200 00:20:42.200 Initializing NVMe Controllers 00:20:42.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:42.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:42.200 Initialization complete. Launching workers. 00:20:42.200 ======================================================== 00:20:42.200 Latency(us) 00:20:42.200 Device Information : IOPS MiB/s Average min max 00:20:42.200 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6330.00 24.73 157.63 128.45 381.93 00:20:42.200 ======================================================== 00:20:42.200 Total : 6330.00 24.73 157.63 128.45 381.93 00:20:42.200 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2613138 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2613139 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:42.200 rmmod nvme_tcp 00:20:42.200 rmmod nvme_fabrics 00:20:42.200 rmmod nvme_keyring 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 2613007 ']' 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2613007 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2613007 ']' 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2613007 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2613007 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.200 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2613007' 00:20:42.200 killing process with pid 2613007 00:20:42.201 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2613007 00:20:42.201 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2613007 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.460 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.365 00:20:44.365 real 0m10.045s 00:20:44.365 user 0m6.419s 00:20:44.365 sys 0m5.447s 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.365 ************************************ 00:20:44.365 END TEST nvmf_control_msg_list 00:20:44.365 
************************************ 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.365 17:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.366 ************************************ 00:20:44.366 START TEST nvmf_wait_for_buf 00:20:44.366 ************************************ 00:20:44.366 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:44.625 * Looking for test storage... 00:20:44.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.626 --rc genhtml_branch_coverage=1 00:20:44.626 --rc genhtml_function_coverage=1 00:20:44.626 --rc genhtml_legend=1 00:20:44.626 --rc geninfo_all_blocks=1 00:20:44.626 --rc geninfo_unexecuted_blocks=1 00:20:44.626 00:20:44.626 ' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.626 --rc genhtml_branch_coverage=1 00:20:44.626 --rc genhtml_function_coverage=1 00:20:44.626 --rc genhtml_legend=1 00:20:44.626 --rc geninfo_all_blocks=1 00:20:44.626 --rc geninfo_unexecuted_blocks=1 00:20:44.626 00:20:44.626 ' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.626 --rc genhtml_branch_coverage=1 00:20:44.626 --rc genhtml_function_coverage=1 00:20:44.626 --rc genhtml_legend=1 00:20:44.626 --rc geninfo_all_blocks=1 00:20:44.626 --rc geninfo_unexecuted_blocks=1 00:20:44.626 00:20:44.626 ' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.626 --rc genhtml_branch_coverage=1 00:20:44.626 --rc genhtml_function_coverage=1 00:20:44.626 --rc genhtml_legend=1 00:20:44.626 --rc geninfo_all_blocks=1 00:20:44.626 --rc geninfo_unexecuted_blocks=1 00:20:44.626 00:20:44.626 ' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.626 17:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:44.626 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.627 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.197 
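
The nvmftestinit trace above registers nvmftestfini with "trap nvmftestfini SIGINT SIGTERM EXIT" before any network state is touched, so the namespaces and firewall rules it is about to create get rolled back even if the test aborts. A minimal sketch of the same pattern, with a hypothetical namespace name standing in for the real cleanup body:

    #!/usr/bin/env bash
    # Register cleanup first, then mutate state; the trap fires on
    # normal exit as well as on SIGINT/SIGTERM.
    cleanup() {
        ip netns del my_test_ns 2>/dev/null || true   # hypothetical namespace name
    }
    trap cleanup SIGINT SIGTERM EXIT
    ip netns add my_test_ns
    # ... test body ...
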
17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:51.197 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:51.197 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:51.197 Found net devices under 0000:af:00.0: cvl_0_0 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:51.197 Found net devices under 0000:af:00.1: cvl_0_1 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.197 17:31:19 
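
The device discovery traced above classifies NICs purely by PCI vendor:device ID (0x8086/0x159b and 0x8086/0x1592 are the Intel E810 entries being matched here) and then resolves each matched function to its kernel net device through /sys/bus/pci/devices/$pci/net/. A standalone sketch of that lookup, simplified from the traced common.sh:

    #!/usr/bin/env bash
    # Report the net device behind each Intel E810 PCI function,
    # mirroring the sysfs walk in the trace above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")    # e.g. 0x8086 (Intel)
        device=$(<"$pci/device")    # e.g. 0x159b (E810 port)
        if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
            done
        fi
    done
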
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.197 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:20:51.198 00:20:51.198 --- 10.0.0.2 ping statistics --- 00:20:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.198 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:20:51.198 00:20:51.198 --- 10.0.0.1 ping statistics --- 00:20:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.198 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2616744 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2616744 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2616744 ']' 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 [2024-12-09 17:31:19.685737] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:20:51.198 [2024-12-09 17:31:19.685791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.198 [2024-12-09 17:31:19.766114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.198 [2024-12-09 17:31:19.804109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.198 [2024-12-09 17:31:19.804143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.198 [2024-12-09 17:31:19.804150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.198 [2024-12-09 17:31:19.804156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.198 [2024-12-09 17:31:19.804161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.198 [2024-12-09 17:31:19.804723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:19 
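
nvmf_tcp_init, traced above, splits the two E810 ports between network namespaces so target and initiator traffic crosses the physical link rather than loopback: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP on port 4420, and a cross-ping in each direction verifies the link. Condensed from the trace (run as root; the SPDK_NVMF comment tag on the iptables rule is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
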
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 Malloc0 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 [2024-12-09 17:31:19.994952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.198 [2024-12-09 17:31:20.023148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.198 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:51.198 [2024-12-09 17:31:20.109313] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:52.575 Initializing NVMe Controllers 00:20:52.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:52.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:52.575 Initialization complete. Launching workers. 00:20:52.575 ======================================================== 00:20:52.575 Latency(us) 00:20:52.575 Device Information : IOPS MiB/s Average min max 00:20:52.575 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32239.68 7317.21 63847.15 00:20:52.575 ======================================================== 00:20:52.575 Total : 129.00 16.12 32239.68 7317.21 63847.15 00:20:52.575 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.575 rmmod nvme_tcp 00:20:52.575 rmmod nvme_fabrics 00:20:52.575 rmmod nvme_keyring 00:20:52.575 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2616744 ']' 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2616744 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2616744 ']' 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2616744 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2616744 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2616744' 00:20:52.834 killing process with pid 2616744 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2616744 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2616744 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.834 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.370 00:20:55.370 real 0m10.505s 00:20:55.370 user 0m4.067s 00:20:55.370 sys 0m4.899s 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.370 ************************************ 00:20:55.370 END TEST nvmf_wait_for_buf 00:20:55.370 ************************************ 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:55.370 17:31:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.370 17:31:24 
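
The nvmf_wait_for_buf test that just ended works by undersizing the shared iobuf small pool (154 buffers of 8192 bytes) before framework init, driving randread I/O through a TCP subsystem, and asserting that the pool's retry counter is non-zero (2038 here), which proves requests had to queue and wait for buffers. A condensed reproduction of the traced steps, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock as in SPDK's test harness, and with the workspace-absolute paths shortened:

    # Start the target paused so pool sizes can be set before init
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    sleep 2   # the harness polls the RPC socket (waitforlisten) instead of sleeping
    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # A non-zero retry count means I/O had to wait for buffers
    ./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
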
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:00.645 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:00.645 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:00.645 Found net devices under 0000:af:00.0: cvl_0_0 00:21:00.645 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:00.646 Found net devices under 0000:af:00.1: cvl_0_1 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.646 ************************************ 00:21:00.646 START TEST nvmf_perf_adq 00:21:00.646 ************************************ 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:00.646 * Looking for test storage... 00:21:00.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.646 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.905 17:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.905 --rc genhtml_branch_coverage=1 00:21:00.905 --rc genhtml_function_coverage=1 00:21:00.905 --rc genhtml_legend=1 00:21:00.905 --rc geninfo_all_blocks=1 00:21:00.905 --rc geninfo_unexecuted_blocks=1 00:21:00.905 00:21:00.905 ' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.905 --rc genhtml_branch_coverage=1 00:21:00.905 --rc genhtml_function_coverage=1 00:21:00.905 --rc genhtml_legend=1 00:21:00.905 --rc geninfo_all_blocks=1 00:21:00.905 --rc geninfo_unexecuted_blocks=1 00:21:00.905 00:21:00.905 ' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.905 --rc genhtml_branch_coverage=1 00:21:00.905 --rc genhtml_function_coverage=1 00:21:00.905 --rc genhtml_legend=1 00:21:00.905 --rc geninfo_all_blocks=1 00:21:00.905 --rc geninfo_unexecuted_blocks=1 00:21:00.905 00:21:00.905 ' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.905 --rc genhtml_branch_coverage=1 00:21:00.905 --rc genhtml_function_coverage=1 00:21:00.905 --rc genhtml_legend=1 00:21:00.905 --rc geninfo_all_blocks=1 00:21:00.905 --rc geninfo_unexecuted_blocks=1 00:21:00.905 00:21:00.905 ' 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
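
The block above is the coverage prologue for the new test: it reads the installed lcov version with awk '{print $NF}' and, because 1.15 compares below 2 in cmp_versions, selects the lcov 1.x spelling of the branch/function coverage rc options that then land in LCOV_OPTS and LCOV. The gate reduces to roughly:

    # Pick lcov rc options based on major version (1.x flag spelling shown in the trace)
    lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. 1.15 on this runner
    if [[ ${lcov_ver%%.*} -lt 2 ]]; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
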
00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:00.905 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:00.906 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.906 17:31:29 
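
As in the earlier test, common.sh builds the initiator identity from nvme gen-hostnqn and reuses the embedded UUID as the host ID, bundling both into the NVME_HOST argument array for later use with the harness's NVME_CONNECT. A sketch of that pattern; the UUID extraction is shorthand for values the trace only shows fully formed, and the connect line is illustrative rather than a step this particular test performs:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # trailing UUID doubles as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # illustrative: how NVME_CONNECT would consume the array
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
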
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.475 17:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:07.475 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:07.475 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.475 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:07.475 Found net devices under 0000:af:00.0: cvl_0_0 00:21:07.475 17:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:07.476 Found net devices under 0000:af:00.1: cvl_0_1 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:07.476 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:07.734 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:10.265 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:15.536 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:15.536 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:15.536 Found net devices under 0000:af:00.0: cvl_0_0 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.536 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:15.537 Found net devices under 0000:af:00.1: cvl_0_1 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:15.537 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:15.795 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:15.795 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:15.795 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:15.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:15.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.755 ms
00:21:15.795
00:21:15.795 --- 10.0.0.2 ping statistics ---
00:21:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:15.795 rtt min/avg/max/mdev = 0.755/0.755/0.755/0.000 ms
00:21:15.795 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:15.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:15.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:21:15.796
00:21:15.796 --- 10.0.0.1 ping statistics ---
00:21:15.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:15.796 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2625323
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2625323
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2625323 ']'
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:15.796 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:15.796 [2024-12-09 17:31:44.827762] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
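
The nvmftestinit plumbing traced above condenses to the standalone sketch below. This is a reconstruction from the commands in this run, not a script shipped by SPDK; the port names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses and the namespace name are specific to this machine and would differ elsewhere.

#!/usr/bin/env bash
# Split one dual-port NIC: the target port lives in its own network
# namespace, the initiator port stays in the root namespace, so NVMe/TCP
# traffic really crosses the physical link between the two ports.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side keeps the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag lets cleanup find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                           # verify both directions before testing
ip netns exec "$NS" ping -c 1 10.0.0.1

The target process is then launched inside the namespace (ip netns exec ... nvmf_tgt --wait-for-rpc, as in the trace that follows), which is why every later target-side command in this log is wrapped in ip netns exec.
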
00:21:15.796 [2024-12-09 17:31:44.827803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.796 [2024-12-09 17:31:44.902811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.796 [2024-12-09 17:31:44.945334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.796 [2024-12-09 17:31:44.945367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.796 [2024-12-09 17:31:44.945374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.796 [2024-12-09 17:31:44.945380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.796 [2024-12-09 17:31:44.945384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.796 [2024-12-09 17:31:44.950239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.796 [2024-12-09 17:31:44.950265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.796 [2024-12-09 17:31:44.950378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.796 [2024-12-09 17:31:44.950378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.733 
17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.733 [2024-12-09 17:31:45.846244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.733 Malloc1 00:21:16.733 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.734 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.734 [2024-12-09 17:31:45.908991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.991 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.991 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2625570 00:21:16.991 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:16.991 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:19.022 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:21:19.022 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.022 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:19.022 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.022 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:21:19.022 "tick_rate": 2100000000,
00:21:19.022 "poll_groups": [
00:21:19.022 {
00:21:19.022 "name": "nvmf_tgt_poll_group_000",
00:21:19.022 "admin_qpairs": 1,
00:21:19.022 "io_qpairs": 1,
00:21:19.022 "current_admin_qpairs": 1,
00:21:19.022 "current_io_qpairs": 1,
00:21:19.022 "pending_bdev_io": 0,
00:21:19.022 "completed_nvme_io": 19516,
00:21:19.022 "transports": [
00:21:19.022 {
00:21:19.022 "trtype": "TCP"
00:21:19.022 }
00:21:19.022 ]
00:21:19.022 },
00:21:19.022 {
00:21:19.022 "name": "nvmf_tgt_poll_group_001",
00:21:19.022 "admin_qpairs": 0,
00:21:19.022 "io_qpairs": 1,
00:21:19.022 "current_admin_qpairs": 0,
00:21:19.022 "current_io_qpairs": 1,
00:21:19.022 "pending_bdev_io": 0,
00:21:19.022 "completed_nvme_io": 19867,
00:21:19.022 "transports": [
00:21:19.022 {
00:21:19.022 "trtype": "TCP"
00:21:19.022 }
00:21:19.022 ]
00:21:19.022 },
00:21:19.022 {
00:21:19.022 "name": "nvmf_tgt_poll_group_002",
00:21:19.022 "admin_qpairs": 0,
00:21:19.022 "io_qpairs": 1,
00:21:19.022 "current_admin_qpairs": 0,
00:21:19.022 "current_io_qpairs": 1,
00:21:19.022 "pending_bdev_io": 0,
00:21:19.022 "completed_nvme_io": 19565,
00:21:19.022 "transports": [
00:21:19.022 {
00:21:19.022 "trtype": "TCP"
00:21:19.022 }
00:21:19.022 ]
00:21:19.022 },
00:21:19.022 {
00:21:19.022 "name": "nvmf_tgt_poll_group_003",
00:21:19.022 "admin_qpairs": 0,
00:21:19.022 "io_qpairs": 1,
00:21:19.022 "current_admin_qpairs": 0,
00:21:19.022 "current_io_qpairs": 1,
00:21:19.022 "pending_bdev_io": 0,
00:21:19.022 "completed_nvme_io": 19521,
00:21:19.022 "transports": [
00:21:19.022 {
00:21:19.022 "trtype": "TCP"
00:21:19.022 }
00:21:19.022 ]
00:21:19.022 }
00:21:19.022 ]
00:21:19.022 }'
17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2625570
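
The pass/fail gate above reduces to one jq pipeline over nvmf_get_stats output. A sketch of the same check, using SPDK's scripts/rpc.py client in place of the suite's rpc_cmd wrapper (socket path and the expected count of 4 are taken from this run; with ADQ disabled in this first pass, each of the four poll groups should own exactly one IO qpair):

# Expect every poll group to carry exactly one active IO qpair.
count=$(./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
  | wc -l)
if [[ $count -ne 4 ]]; then
  echo "expected 4 poll groups with 1 IO qpair each, got $count" >&2
  exit 1
fi

jq prints one line per matching poll group (length of an object is its key count, but only the number of output lines matters here), so wc -l yields how many groups currently hold exactly one connection.
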
00:21:27.124 Initializing NVMe Controllers
00:21:27.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:27.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:27.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:27.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:27.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:27.124 Initialization complete. Launching workers.
00:21:27.124 ========================================================
00:21:27.124                                                                           Latency(us)
00:21:27.124 Device Information                                          :     IOPS   MiB/s Average     min     max
00:21:27.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10441.10 40.79 6129.54 2202.83 11123.40
00:21:27.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10604.70 41.42 6034.85 1806.09 10476.57
00:21:27.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10524.30 41.11 6082.19 1850.00 10352.90
00:21:27.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10501.80 41.02 6093.05 2399.12 10066.42
00:21:27.124 ========================================================
00:21:27.124 Total                                                       : 42071.90 164.34 6084.72 1806.09 11123.40
00:21:27.124
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:27.124 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2625323 ']'
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2625323
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2625323 ']'
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2625323
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625323
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625323'
00:21:27.124 killing process with pid 2625323
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2625323
00:21:27.124 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2625323
00:21:27.383 17:31:56
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.383 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.283 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.283 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:29.283 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:29.542 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:30.919 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:33.463 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:38.737 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:38.737 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:38.737 Found net devices under 0000:af:00.0: cvl_0_0 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:38.737 Found net devices under 0000:af:00.1: cvl_0_1 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.737 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.738 17:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:38.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:38.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms
00:21:38.738
00:21:38.738 --- 10.0.0.2 ping statistics ---
00:21:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:38.738 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:38.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:38.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:21:38.738
00:21:38.738 --- 10.0.0.1 ping statistics ---
00:21:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:38.738 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:21:38.738 net.core.busy_poll = 1
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:21:38.738 net.core.busy_read = 1
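
At this point the driver-side ADQ prerequisites are in place. Condensed from the adq_configure_driver trace above (device and namespace names are from this run, and the private flag is specific to the Intel ice driver):

# Driver-side ADQ prerequisites for the target port.
ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
ns ethtool --offload cvl_0_0 hw-tc-offload on   # let the NIC steer traffic classes in hardware
ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off  # disabled per Intel ADQ guidance
sysctl -w net.core.busy_poll=1   # busy-poll sockets (value in microseconds) instead of sleeping on IRQs
sysctl -w net.core.busy_read=1

busy_poll and busy_read trade CPU for latency: the kernel spins on the device queue briefly before blocking, which is what lets the ADQ run below keep each connection pinned to its hardware queue.
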
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:38.738 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2629436
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2629436
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2629436 ']'
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:38.997 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:38.997 [2024-12-09 17:32:07.970654] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:21:38.997 [2024-12-09 17:32:07.970699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:38.997 [2024-12-09 17:32:08.051986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:38.997 [2024-12-09 17:32:08.094008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
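
The mqprio and flower commands in the trace above are the core of the ADQ setup. A condensed sketch, with the interface, destination IP and queue layout taken from this run (everything executes inside the target's namespace):

ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
# Two traffic classes over four queues: TC0 -> queues 0-1 (default traffic),
# TC1 -> queues 2-3 (NVMe/TCP). 'hw 1 mode channel' asks the NIC to back
# each TC with its own hardware queue group.
ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP (TCP port 4420 toward the target IP) into TC1 entirely in
# hardware; skip_sw keeps the filter out of the software datapath.
ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Align transmit queues with receive queues so a CPU sends on the queue it polls.
ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

With this filter in place, connections to port 4420 land on the dedicated queue group, which is what the second perf run below exercises.
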
00:21:38.997 [2024-12-09 17:32:08.094044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.997 [2024-12-09 17:32:08.094051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.997 [2024-12-09 17:32:08.094057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.997 [2024-12-09 17:32:08.094063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.997 [2024-12-09 17:32:08.095512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.997 [2024-12-09 17:32:08.095617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.997 [2024-12-09 17:32:08.095713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.997 [2024-12-09 17:32:08.095715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 [2024-12-09 17:32:08.975562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 Malloc1 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 [2024-12-09 17:32:09.035005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2629684 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:39.931 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.456 17:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:42.456 "tick_rate": 2100000000, 00:21:42.456 "poll_groups": [ 00:21:42.456 { 00:21:42.456 "name": "nvmf_tgt_poll_group_000", 00:21:42.456 "admin_qpairs": 1, 00:21:42.456 "io_qpairs": 2, 00:21:42.456 "current_admin_qpairs": 1, 00:21:42.456 "current_io_qpairs": 2, 00:21:42.456 "pending_bdev_io": 0, 00:21:42.456 "completed_nvme_io": 28551, 00:21:42.456 "transports": [ 00:21:42.456 { 00:21:42.456 "trtype": "TCP" 00:21:42.456 } 00:21:42.456 ] 00:21:42.456 }, 00:21:42.456 { 00:21:42.456 "name": "nvmf_tgt_poll_group_001", 00:21:42.456 "admin_qpairs": 0, 00:21:42.456 "io_qpairs": 2, 00:21:42.456 "current_admin_qpairs": 0, 00:21:42.456 "current_io_qpairs": 2, 00:21:42.456 "pending_bdev_io": 0, 00:21:42.456 "completed_nvme_io": 28085, 00:21:42.456 "transports": [ 00:21:42.456 { 00:21:42.456 "trtype": "TCP" 00:21:42.456 } 00:21:42.456 ] 00:21:42.456 }, 00:21:42.456 { 00:21:42.456 "name": "nvmf_tgt_poll_group_002", 00:21:42.456 "admin_qpairs": 0, 00:21:42.456 "io_qpairs": 0, 00:21:42.456 "current_admin_qpairs": 0, 00:21:42.456 "current_io_qpairs": 0, 00:21:42.456 "pending_bdev_io": 0, 00:21:42.456 "completed_nvme_io": 0, 00:21:42.456 "transports": [ 00:21:42.456 { 00:21:42.456 "trtype": "TCP" 00:21:42.456 } 00:21:42.456 ] 00:21:42.456 }, 00:21:42.456 { 00:21:42.456 "name": "nvmf_tgt_poll_group_003", 00:21:42.456 "admin_qpairs": 0, 00:21:42.456 "io_qpairs": 0, 00:21:42.456 "current_admin_qpairs": 0, 00:21:42.456 "current_io_qpairs": 0, 00:21:42.456 "pending_bdev_io": 0, 00:21:42.456 "completed_nvme_io": 0, 00:21:42.456 "transports": [ 00:21:42.456 { 00:21:42.456 "trtype": "TCP" 00:21:42.456 } 00:21:42.456 ] 00:21:42.456 } 00:21:42.456 ] 00:21:42.456 }' 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:42.456 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2629684 00:21:50.559 Initializing NVMe Controllers 00:21:50.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:50.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:50.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:50.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:50.559 Initialization complete. Launching workers. 
00:21:50.559 ======================================================== 00:21:50.559 Latency(us) 00:21:50.559 Device Information : IOPS MiB/s Average min max 00:21:50.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7963.80 31.11 8063.04 1216.44 52473.77 00:21:50.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8539.70 33.36 7525.00 1451.86 52433.09 00:21:50.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6878.30 26.87 9305.55 1476.98 55425.88 00:21:50.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6933.80 27.09 9228.50 1288.66 52923.20 00:21:50.559 ======================================================== 00:21:50.559 Total : 30315.59 118.42 8459.96 1216.44 55425.88 00:21:50.559 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.559 rmmod nvme_tcp 00:21:50.559 rmmod nvme_fabrics 00:21:50.559 rmmod nvme_keyring 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2629436 ']' 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2629436 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2629436 ']' 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2629436 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629436 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629436' 00:21:50.559 killing process with pid 2629436 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2629436 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2629436 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.559 
17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.559 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.846 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:53.847 00:21:53.847 real 0m52.921s 00:21:53.847 user 2m49.772s 00:21:53.847 sys 0m10.286s 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.847 ************************************ 00:21:53.847 END TEST nvmf_perf_adq 00:21:53.847 ************************************ 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:53.847 ************************************ 00:21:53.847 START TEST nvmf_shutdown 00:21:53.847 ************************************ 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:53.847 * Looking for test storage... 
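Before the shutdown suite output begins, note how the perf_adq run above judged itself: with hardware steering active, every I/O qpair should land on the two poll groups backing TC1's queues, leaving the other two groups idle. A minimal sketch of the perf_adq.sh@107-109 check, with scripts/rpc.py standing in for the in-tree rpc_cmd wrapper used by the trace:

# Count poll groups that carried no I/O qpairs during the perf run; the jq
# filter is verbatim from the trace above (count=2 idle groups means pass).
idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
# perf_adq.sh@109 fails the test when fewer than 2 of the 4 groups sat idle,
# i.e. when traffic leaked outside the ADQ-steered queue set.
[[ $idle -lt 2 ]] && { echo "ADQ check failed: only $idle idle poll groups"; exit 1; }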
00:21:53.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.847 --rc genhtml_branch_coverage=1 00:21:53.847 --rc genhtml_function_coverage=1 00:21:53.847 --rc genhtml_legend=1 00:21:53.847 --rc geninfo_all_blocks=1 00:21:53.847 --rc geninfo_unexecuted_blocks=1 00:21:53.847 00:21:53.847 ' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.847 --rc genhtml_branch_coverage=1 00:21:53.847 --rc genhtml_function_coverage=1 00:21:53.847 --rc genhtml_legend=1 00:21:53.847 --rc geninfo_all_blocks=1 00:21:53.847 --rc geninfo_unexecuted_blocks=1 00:21:53.847 00:21:53.847 ' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.847 --rc genhtml_branch_coverage=1 00:21:53.847 --rc genhtml_function_coverage=1 00:21:53.847 --rc genhtml_legend=1 00:21:53.847 --rc geninfo_all_blocks=1 00:21:53.847 --rc geninfo_unexecuted_blocks=1 00:21:53.847 00:21:53.847 ' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.847 --rc genhtml_branch_coverage=1 00:21:53.847 --rc genhtml_function_coverage=1 00:21:53.847 --rc genhtml_legend=1 00:21:53.847 --rc geninfo_all_blocks=1 00:21:53.847 --rc geninfo_unexecuted_blocks=1 00:21:53.847 00:21:53.847 ' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.847 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:53.848 17:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:53.848 ************************************ 00:21:53.848 START TEST nvmf_shutdown_tc1 00:21:53.848 ************************************ 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.848 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.413 17:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.413 17:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:00.413 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:00.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:00.413 Found net devices under 0000:af:00.0: cvl_0_0 00:22:00.413 17:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.413 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:00.414 Found net devices under 0000:af:00.1: cvl_0_1 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:22:00.414 00:22:00.414 --- 10.0.0.2 ping statistics --- 00:22:00.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.414 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:00.414 00:22:00.414 --- 10.0.0.1 ping statistics --- 00:22:00.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.414 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2635085 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2635085 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2635085 ']' 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
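The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) amounts to the following; the interface names and the 10.0.0.0/24 addressing are this runner's e810 pair, copied from the trace.

# Condensed from the nvmf_tcp_init trace above; values are run-specific.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

# The target interface moves into its own namespace so initiator traffic
# genuinely crosses the link instead of being loopback-routed.
ip netns add "$NETNS"
ip link set "$TGT_IF" netns "$NETNS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NETNS" ip link set "$TGT_IF" up
ip netns exec "$NETNS" ip link set lo up

# Open the NVMe/TCP port (the trace tags the rule with an SPDK_NVMF comment)
# and verify reachability in both directions before starting the target.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NETNS" ping -c 1 10.0.0.1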
00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.414 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.414 [2024-12-09 17:32:28.912990] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:22:00.414 [2024-12-09 17:32:28.913032] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.414 [2024-12-09 17:32:28.992063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.414 [2024-12-09 17:32:29.032690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.414 [2024-12-09 17:32:29.032725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.414 [2024-12-09 17:32:29.032732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.414 [2024-12-09 17:32:29.032738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.414 [2024-12-09 17:32:29.032743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.414 [2024-12-09 17:32:29.034331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.414 [2024-12-09 17:32:29.034442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.414 [2024-12-09 17:32:29.034548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.414 [2024-12-09 17:32:29.034550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.414 [2024-12-09 17:32:29.171863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:00.414 17:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.414 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.415 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.415 Malloc1 
00:22:00.415 [2024-12-09 17:32:29.282870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.415 Malloc2 00:22:00.415 Malloc3 00:22:00.415 Malloc4 00:22:00.415 Malloc5 00:22:00.415 Malloc6 00:22:00.415 Malloc7 00:22:00.415 Malloc8 00:22:00.673 Malloc9 00:22:00.673 Malloc10 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2635351 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2635351 /var/tmp/bdevperf.sock 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2635351 ']' 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
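The create_subsystems phase above (shutdown.sh@27-36) appends one RPC batch per subsystem to rpcs.txt and replays the file in a single RPC session. The heredoc body is not echoed by the trace, so the per-subsystem lines below are reconstructed from the Malloc1..Malloc10 output, the 64/512 malloc geometry, and the listener notice; the serial-number argument in particular is a guess.

# Reconstruction of the shutdown.sh@27-36 batch; the RPC line contents are
# inferred, not echoed by the trace (the serial-number format is a guess).
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -rf "$rpcs"
for i in {1..10}; do
  cat >> "$rpcs" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# Stand-in for the in-tree rpc_cmd call at shutdown.sh@36; rpc.py executes
# one RPC per input line when fed a batch on stdin.
scripts/rpc.py < "$rpcs"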
00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.673 { 00:22:00.673 "params": { 00:22:00.673 "name": "Nvme$subsystem", 00:22:00.673 "trtype": "$TEST_TRANSPORT", 00:22:00.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.673 "adrfam": "ipv4", 00:22:00.673 "trsvcid": "$NVMF_PORT", 00:22:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.673 "hdgst": ${hdgst:-false}, 00:22:00.673 "ddgst": ${ddgst:-false} 00:22:00.673 }, 00:22:00.673 "method": "bdev_nvme_attach_controller" 00:22:00.673 } 00:22:00.673 EOF 00:22:00.673 )") 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.673 { 00:22:00.673 "params": { 00:22:00.673 "name": "Nvme$subsystem", 00:22:00.673 "trtype": "$TEST_TRANSPORT", 00:22:00.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.673 "adrfam": "ipv4", 00:22:00.673 "trsvcid": "$NVMF_PORT", 00:22:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.673 "hdgst": ${hdgst:-false}, 00:22:00.673 "ddgst": ${ddgst:-false} 00:22:00.673 }, 00:22:00.673 "method": "bdev_nvme_attach_controller" 00:22:00.673 } 00:22:00.673 EOF 00:22:00.673 )") 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.673 { 00:22:00.673 "params": { 00:22:00.673 "name": "Nvme$subsystem", 00:22:00.673 "trtype": "$TEST_TRANSPORT", 00:22:00.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.673 "adrfam": "ipv4", 00:22:00.673 "trsvcid": "$NVMF_PORT", 00:22:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.673 "hdgst": ${hdgst:-false}, 00:22:00.673 "ddgst": ${ddgst:-false} 00:22:00.673 }, 00:22:00.673 "method": "bdev_nvme_attach_controller" 00:22:00.673 } 00:22:00.673 EOF 00:22:00.673 )") 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:00.673 { 00:22:00.673 "params": { 00:22:00.673 "name": "Nvme$subsystem", 00:22:00.673 "trtype": "$TEST_TRANSPORT", 00:22:00.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.673 "adrfam": "ipv4", 00:22:00.673 "trsvcid": "$NVMF_PORT", 00:22:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.673 "hdgst": ${hdgst:-false}, 00:22:00.673 "ddgst": ${ddgst:-false} 00:22:00.673 }, 00:22:00.673 "method": "bdev_nvme_attach_controller" 00:22:00.673 } 00:22:00.673 EOF 00:22:00.673 )") 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.673 { 00:22:00.673 "params": { 00:22:00.673 "name": "Nvme$subsystem", 00:22:00.673 "trtype": "$TEST_TRANSPORT", 00:22:00.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.673 "adrfam": "ipv4", 00:22:00.673 "trsvcid": "$NVMF_PORT", 00:22:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.673 "hdgst": ${hdgst:-false}, 00:22:00.673 "ddgst": ${ddgst:-false} 00:22:00.673 }, 00:22:00.673 "method": "bdev_nvme_attach_controller" 00:22:00.673 } 00:22:00.673 EOF 00:22:00.673 )") 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.673 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.673 { 00:22:00.673 "params": { 00:22:00.673 "name": "Nvme$subsystem", 00:22:00.673 "trtype": "$TEST_TRANSPORT", 00:22:00.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.673 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "$NVMF_PORT", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.674 "hdgst": ${hdgst:-false}, 00:22:00.674 "ddgst": ${ddgst:-false} 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 } 00:22:00.674 EOF 00:22:00.674 )") 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.674 { 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme$subsystem", 00:22:00.674 "trtype": "$TEST_TRANSPORT", 00:22:00.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "$NVMF_PORT", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.674 "hdgst": ${hdgst:-false}, 00:22:00.674 "ddgst": ${ddgst:-false} 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 } 00:22:00.674 EOF 00:22:00.674 )") 00:22:00.674 [2024-12-09 17:32:29.764390] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:22:00.674 [2024-12-09 17:32:29.764436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.674 { 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme$subsystem", 00:22:00.674 "trtype": "$TEST_TRANSPORT", 00:22:00.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "$NVMF_PORT", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.674 "hdgst": ${hdgst:-false}, 00:22:00.674 "ddgst": ${ddgst:-false} 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 } 00:22:00.674 EOF 00:22:00.674 )") 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.674 { 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme$subsystem", 00:22:00.674 "trtype": "$TEST_TRANSPORT", 00:22:00.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "$NVMF_PORT", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.674 "hdgst": ${hdgst:-false}, 00:22:00.674 "ddgst": ${ddgst:-false} 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 } 00:22:00.674 EOF 00:22:00.674 )") 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.674 { 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme$subsystem", 00:22:00.674 "trtype": "$TEST_TRANSPORT", 00:22:00.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "$NVMF_PORT", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.674 "hdgst": ${hdgst:-false}, 00:22:00.674 "ddgst": ${ddgst:-false} 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 } 00:22:00.674 EOF 00:22:00.674 )") 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
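The xtrace above is nvmf/common.sh's gen_nvmf_target_json at work: the loop at @562 appends one bdev_nvme_attach_controller fragment per subsystem through a $(cat <<-EOF ...) command substitution, and the jq . at @584 validates the comma-joined result. A minimal sketch of that pattern follows; it is reconstructed from the trace rather than quoted from the script, and the subsystems/bdev envelope plus the :- fallback defaults are assumptions, not the verbatim nvmf/common.sh source.

# Sketch (assumed shape) of the JSON assembly traced above.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one JSON fragment per subsystem; the unquoted heredoc lets the
        # shell expand $subsystem and the exported NVMF_* variables in place
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins elements on the first character of IFS, hence IFS=,
    # in the subshell; jq . parses the assembled document and fails fast if any
    # fragment is malformed
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON
}

gen_target_json_sketch 1 2 3   # same shape as the ten-entry document printed below

The resulting document is what the test hands to bdev_svc and bdevperf via --json <(...), as the shutdown.sh invocations further down show.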
00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:00.674 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme1", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme2", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme3", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme4", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme5", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme6", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme7", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme8", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme9", 00:22:00.674 "trtype": "tcp", 00:22:00.674 "traddr": "10.0.0.2", 00:22:00.674 "adrfam": "ipv4", 00:22:00.674 "trsvcid": "4420", 00:22:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:00.674 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:00.674 "hdgst": false, 00:22:00.674 "ddgst": false 00:22:00.674 }, 00:22:00.674 "method": "bdev_nvme_attach_controller" 00:22:00.674 },{ 00:22:00.674 "params": { 00:22:00.674 "name": "Nvme10", 00:22:00.674 "trtype": "tcp", 00:22:00.675 "traddr": "10.0.0.2", 00:22:00.675 "adrfam": "ipv4", 00:22:00.675 "trsvcid": "4420", 00:22:00.675 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:00.675 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:00.675 "hdgst": false, 00:22:00.675 "ddgst": false 00:22:00.675 }, 00:22:00.675 "method": "bdev_nvme_attach_controller" 00:22:00.675 }' 00:22:00.675 [2024-12-09 17:32:29.841437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.932 [2024-12-09 17:32:29.881718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.825 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2635351 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:02.826 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:03.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2635351 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2635085 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.757 { 00:22:03.757 "params": { 00:22:03.757 "name": "Nvme$subsystem", 00:22:03.757 "trtype": "$TEST_TRANSPORT", 00:22:03.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.757 "adrfam": "ipv4", 00:22:03.757 "trsvcid": "$NVMF_PORT", 00:22:03.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.757 "hdgst": ${hdgst:-false}, 00:22:03.757 "ddgst": ${ddgst:-false} 00:22:03.757 }, 00:22:03.757 "method": "bdev_nvme_attach_controller" 00:22:03.757 } 00:22:03.757 EOF 00:22:03.757 )") 00:22:03.757 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.757 [2024-12-09 17:32:32.693723] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:22:03.758 [2024-12-09 17:32:32.693775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635837 ] 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.758 { 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme$subsystem", 00:22:03.758 "trtype": "$TEST_TRANSPORT", 00:22:03.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "$NVMF_PORT", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.758 "hdgst": ${hdgst:-false}, 00:22:03.758 "ddgst": ${ddgst:-false} 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 } 00:22:03.758 EOF 00:22:03.758 )") 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.758 { 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme$subsystem", 00:22:03.758 "trtype": "$TEST_TRANSPORT", 00:22:03.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "$NVMF_PORT", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.758 "hdgst": ${hdgst:-false}, 00:22:03.758 "ddgst": ${ddgst:-false} 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 } 00:22:03.758 EOF 00:22:03.758 )") 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.758 { 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme$subsystem", 00:22:03.758 "trtype": "$TEST_TRANSPORT", 00:22:03.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "$NVMF_PORT", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.758 "hdgst": ${hdgst:-false}, 00:22:03.758 "ddgst": ${ddgst:-false} 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 } 00:22:03.758 EOF 00:22:03.758 )") 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
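Putting the trace together: the bdev_svc launched at shutdown.sh line 74, which held the ten attached controllers, has just been SIGKILLed (@84), the target's survival is asserted with kill -0 (@89), and bdevperf now re-attaches from scratch with a regenerated config (@92). A condensed sketch of that sequence, with the harness helpers (rpc_cmd, gen_nvmf_target_json) and pid variables standing in for the literal values from this run:

# Condensed sketch of the tc1 kill/verify flow; paths are relative to the
# repo root and the pids are shell variables, not this run's literals.
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # returns once the attaches finish

kill -9 "$perfpid"          # hard-kill the app holding the ten controllers
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"          # signal 0 delivers nothing: it only asserts the target pid is alive

# fresh attach plus a verify workload across all ten subsystems
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1

The -w verify workload reads back and compares what it wrote, so the per-Nvme IOPS table printed below doubles as a data-integrity check on the surviving target.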
00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:03.758 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme1", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme2", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme3", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme4", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme5", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme6", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme7", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme8", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme9", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 },{ 00:22:03.758 "params": { 00:22:03.758 "name": "Nvme10", 00:22:03.758 "trtype": "tcp", 00:22:03.758 "traddr": "10.0.0.2", 00:22:03.758 "adrfam": "ipv4", 00:22:03.758 "trsvcid": "4420", 00:22:03.758 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:03.758 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:03.758 "hdgst": false, 00:22:03.758 "ddgst": false 00:22:03.758 }, 00:22:03.758 "method": "bdev_nvme_attach_controller" 00:22:03.758 }' 00:22:03.758 [2024-12-09 17:32:32.771976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.758 [2024-12-09 17:32:32.811673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.129 Running I/O for 1 seconds... 00:22:06.500 2323.00 IOPS, 145.19 MiB/s 00:22:06.500 Latency(us) 00:22:06.500 [2024-12-09T16:32:35.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.500 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme1n1 : 1.16 280.58 17.54 0.00 0.00 224673.40 7177.75 215707.06 00:22:06.500 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme2n1 : 1.17 274.19 17.14 0.00 0.00 228145.64 17101.78 214708.42 00:22:06.500 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme3n1 : 1.16 279.46 17.47 0.00 0.00 220139.22 4181.82 226692.14 00:22:06.500 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme4n1 : 1.15 304.52 19.03 0.00 0.00 192403.49 7708.28 193736.90 00:22:06.500 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme5n1 : 1.17 272.41 17.03 0.00 0.00 220229.83 29085.50 212711.13 00:22:06.500 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme6n1 : 1.17 276.58 17.29 0.00 0.00 213377.55 4493.90 211712.49 00:22:06.500 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme7n1 : 1.14 279.63 17.48 0.00 0.00 207882.83 13419.28 214708.42 00:22:06.500 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme8n1 : 1.17 272.93 17.06 0.00 0.00 210682.44 13606.52 228689.43 00:22:06.500 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme9n1 : 1.22 262.60 16.41 0.00 0.00 209376.50 15166.90 217704.35 00:22:06.500 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:06.500 Verification LBA range: start 0x0 length 0x400 00:22:06.500 Nvme10n1 : 1.18 271.33 16.96 0.00 0.00 205822.05 20222.54 232684.01 00:22:06.500 [2024-12-09T16:32:35.679Z] =================================================================================================================== 00:22:06.500 [2024-12-09T16:32:35.679Z] Total : 2774.22 173.39 0.00 0.00 213106.32 4181.82 232684.01 00:22:06.500 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:06.500 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:06.500 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.501 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.501 rmmod nvme_tcp 00:22:06.501 rmmod nvme_fabrics 00:22:06.758 rmmod nvme_keyring 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2635085 ']' 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2635085 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2635085 ']' 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2635085 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2635085 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.758 17:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2635085' 00:22:06.758 killing process with pid 2635085 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2635085 00:22:06.758 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2635085 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.016 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.017 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.550 00:22:09.550 real 0m15.268s 00:22:09.550 user 0m34.212s 00:22:09.550 sys 0m5.797s 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.550 ************************************ 00:22:09.550 END TEST nvmf_shutdown_tc1 00:22:09.550 ************************************ 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:09.550 ************************************ 00:22:09.550 START TEST nvmf_shutdown_tc2 00:22:09.550 ************************************ 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.550 17:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.550 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:09.551 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.551 17:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:09.551 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:09.551 Found net devices under 0000:af:00.0: cvl_0_0 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.551 17:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:09.551 Found net devices under 0000:af:00.1: cvl_0_1 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:22:09.551 00:22:09.551 --- 10.0.0.2 ping statistics --- 00:22:09.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.551 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:22:09.551 00:22:09.551 --- 10.0.0.1 ping statistics --- 00:22:09.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.551 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2636860 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2636860 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2636860 ']' 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.551 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.551 [2024-12-09 17:32:38.625399] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:22:09.552 [2024-12-09 17:32:38.625446] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.552 [2024-12-09 17:32:38.703968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.810 [2024-12-09 17:32:38.745940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.810 [2024-12-09 17:32:38.745974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.810 [2024-12-09 17:32:38.745981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.810 [2024-12-09 17:32:38.745986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.810 [2024-12-09 17:32:38.745991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
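Before the notices above, nvmftestinit carved the physical e810 pair into a loopback topology: one port (cvl_0_0) moves into a fresh network namespace and becomes the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before the target starts inside the namespace. Condensed from the trace (interface names and addresses are this run's values):

# tc2 network bring-up, condensed from the nvmf_tcp_init trace above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the harness tags the rule with an SPDK_NVMF comment
# so nvmftestfini can strip it again via iptables-save | grep -v | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator
# the target then runs inside the namespace; -m 0x1E pins reactors to cores 1-4,
# matching the four "Reactor started" notices below
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E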
00:22:09.810 [2024-12-09 17:32:38.747415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.810 [2024-12-09 17:32:38.747522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.810 [2024-12-09 17:32:38.747626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.810 [2024-12-09 17:32:38.747627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.810 [2024-12-09 17:32:38.884872] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.810 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.810 Malloc1 00:22:10.067 [2024-12-09 17:32:39.006950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.068 Malloc2 00:22:10.068 Malloc3 00:22:10.068 Malloc4 00:22:10.068 Malloc5 00:22:10.068 Malloc6 00:22:10.068 Malloc7 00:22:10.325 Malloc8 00:22:10.326 Malloc9 00:22:10.326 Malloc10 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2637057 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2637057 /var/tmp/bdevperf.sock 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2637057 ']' 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.326 17:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 
"name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 [2024-12-09 17:32:39.482009] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:22:10.326 [2024-12-09 17:32:39.482063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637057 ] 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.326 "hdgst": ${hdgst:-false}, 00:22:10.326 "ddgst": ${ddgst:-false} 00:22:10.326 }, 00:22:10.326 "method": "bdev_nvme_attach_controller" 00:22:10.326 } 00:22:10.326 EOF 00:22:10.326 )") 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.326 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.326 { 00:22:10.326 "params": { 00:22:10.326 "name": "Nvme$subsystem", 00:22:10.326 "trtype": "$TEST_TRANSPORT", 00:22:10.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.326 "adrfam": "ipv4", 00:22:10.326 "trsvcid": "$NVMF_PORT", 00:22:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.327 "hdgst": ${hdgst:-false}, 00:22:10.327 "ddgst": ${ddgst:-false} 00:22:10.327 }, 00:22:10.327 "method": "bdev_nvme_attach_controller" 00:22:10.327 } 00:22:10.327 EOF 00:22:10.327 )") 00:22:10.327 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.327 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.327 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.327 { 00:22:10.327 "params": { 00:22:10.327 "name": "Nvme$subsystem", 00:22:10.327 "trtype": "$TEST_TRANSPORT", 00:22:10.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.327 
"adrfam": "ipv4", 00:22:10.327 "trsvcid": "$NVMF_PORT", 00:22:10.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.327 "hdgst": ${hdgst:-false}, 00:22:10.327 "ddgst": ${ddgst:-false} 00:22:10.327 }, 00:22:10.327 "method": "bdev_nvme_attach_controller" 00:22:10.327 } 00:22:10.327 EOF 00:22:10.327 )") 00:22:10.327 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.584 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:10.584 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:10.584 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:10.584 "params": { 00:22:10.584 "name": "Nvme1", 00:22:10.584 "trtype": "tcp", 00:22:10.584 "traddr": "10.0.0.2", 00:22:10.584 "adrfam": "ipv4", 00:22:10.584 "trsvcid": "4420", 00:22:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.584 "hdgst": false, 00:22:10.584 "ddgst": false 00:22:10.584 }, 00:22:10.584 "method": "bdev_nvme_attach_controller" 00:22:10.584 },{ 00:22:10.584 "params": { 00:22:10.584 "name": "Nvme2", 00:22:10.584 "trtype": "tcp", 00:22:10.584 "traddr": "10.0.0.2", 00:22:10.584 "adrfam": "ipv4", 00:22:10.584 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme3", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme4", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme5", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme6", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme7", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 
00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme8", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme9", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 },{ 00:22:10.585 "params": { 00:22:10.585 "name": "Nvme10", 00:22:10.585 "trtype": "tcp", 00:22:10.585 "traddr": "10.0.0.2", 00:22:10.585 "adrfam": "ipv4", 00:22:10.585 "trsvcid": "4420", 00:22:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:10.585 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:10.585 "hdgst": false, 00:22:10.585 "ddgst": false 00:22:10.585 }, 00:22:10.585 "method": "bdev_nvme_attach_controller" 00:22:10.585 }' 00:22:10.585 [2024-12-09 17:32:39.546275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.585 [2024-12-09 17:32:39.585883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.955 Running I/O for 10 seconds... 
00:22:12.213 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.213 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:12.213 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:12.213 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.213 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:12.470 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.728 17:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2637057 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2637057 ']' 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2637057 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637057 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637057' 00:22:12.728 killing process with pid 2637057 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2637057 00:22:12.728 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2637057 00:22:12.728 Received shutdown signal, test time was about 0.997724 seconds 00:22:12.728 00:22:12.728 Latency(us) 00:22:12.728 [2024-12-09T16:32:41.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.728 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.728 Verification LBA range: start 0x0 length 0x400 00:22:12.728 Nvme1n1 : 0.98 261.78 16.36 0.00 0.00 241881.48 15978.30 218702.99 00:22:12.728 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.728 Verification LBA range: start 0x0 length 0x400 00:22:12.728 Nvme2n1 : 0.99 334.82 20.93 0.00 0.00 185225.49 5554.96 183750.46 00:22:12.728 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.728 Verification LBA range: start 0x0 length 0x400 00:22:12.728 Nvme3n1 : 1.00 320.94 20.06 0.00 0.00 190557.92 12170.97 218702.99 00:22:12.728 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.728 Verification LBA range: start 0x0 length 0x400 00:22:12.728 Nvme4n1 : 0.99 323.49 20.22 0.00 0.00 186438.41 17725.93 
206719.27 00:22:12.728 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.728 Verification LBA range: start 0x0 length 0x400 00:22:12.729 Nvme5n1 : 0.99 258.26 16.14 0.00 0.00 229879.47 20971.52 224694.86 00:22:12.729 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.729 Verification LBA range: start 0x0 length 0x400 00:22:12.729 Nvme6n1 : 0.98 260.43 16.28 0.00 0.00 223947.58 17601.10 217704.35 00:22:12.729 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.729 Verification LBA range: start 0x0 length 0x400 00:22:12.729 Nvme7n1 : 0.97 263.42 16.46 0.00 0.00 217275.73 32455.92 179755.89 00:22:12.729 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.729 Verification LBA range: start 0x0 length 0x400 00:22:12.729 Nvme8n1 : 0.97 262.91 16.43 0.00 0.00 213779.02 15416.56 196732.83 00:22:12.729 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.729 Verification LBA range: start 0x0 length 0x400 00:22:12.729 Nvme9n1 : 0.97 265.02 16.56 0.00 0.00 207873.58 14667.58 217704.35 00:22:12.729 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.729 Verification LBA range: start 0x0 length 0x400 00:22:12.729 Nvme10n1 : 0.99 257.59 16.10 0.00 0.00 211205.36 17476.27 244667.73 00:22:12.729 [2024-12-09T16:32:41.908Z] =================================================================================================================== 00:22:12.729 [2024-12-09T16:32:41.908Z] Total : 2808.66 175.54 0.00 0.00 209087.20 5554.96 244667.73 00:22:12.986 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2636860 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.916 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.917 rmmod nvme_tcp 00:22:14.174 rmmod nvme_fabrics 00:22:14.174 rmmod nvme_keyring 00:22:14.174 17:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2636860 ']' 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2636860 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2636860 ']' 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2636860 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2636860 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2636860' 00:22:14.174 killing process with pid 2636860 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2636860 00:22:14.174 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2636860 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.432 17:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.432 17:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:16.969 00:22:16.969 real 0m7.358s 00:22:16.969 user 0m21.641s 00:22:16.969 sys 0m1.383s 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.969 ************************************ 00:22:16.969 END TEST nvmf_shutdown_tc2 00:22:16.969 ************************************ 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.969 ************************************ 00:22:16.969 START TEST nvmf_shutdown_tc3 00:22:16.969 ************************************ 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.969 17:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.969 17:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:16.969 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:16.969 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.969 17:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.969 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:16.969 Found net devices under 0000:af:00.0: cvl_0_0 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:16.970 Found net devices under 0000:af:00.1: cvl_0_1 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:16.970 17:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:16.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:22:16.970 00:22:16.970 --- 10.0.0.2 ping statistics --- 00:22:16.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.970 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:16.970 00:22:16.970 --- 10.0.0.1 ping statistics --- 00:22:16.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.970 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:16.970 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2638165 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2638165 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2638165 ']' 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
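Two details of the tc3 start-up are worth flagging. First, the launch line above carries three ip netns exec cvl_0_0_ns_spdk prefixes where tc2's had two: nvmf/common.sh@293 prepends the namespace wrapper to NVMF_APP on every nvmftestinit, so the prefix accumulates once per test case in this file; re-entering a namespace the process is already in is evidently harmless, and the target starts normally. Second, tc3 reuses the I/O progress gate tc2 just exercised (target/shutdown.sh@51-70 in the trace): poll bdevperf's iostat over its RPC socket until the first bdev has completed at least 100 reads, retrying up to ten times at 0.25 s intervals before declaring the run live. A condensed sketch of that helper (rpc_cmd is the autotest wrapper around scripts/rpc.py; the for-loop form compresses the counter seen in the trace):

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if ((read_io_count >= 100)); then
            ret=0   # I/O is flowing; the shutdown sequence may proceed
            break
        fi
        sleep 0.25
    done
    return "$ret"
}

Used as in the tc2 trace: waitforio /var/tmp/bdevperf.sock Nvme1n1 && killprocess "$perfpid".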
00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.970 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:16.970 [2024-12-09 17:32:46.093864] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:22:16.970 [2024-12-09 17:32:46.093927] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.228 [2024-12-09 17:32:46.169592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.228 [2024-12-09 17:32:46.211839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.228 [2024-12-09 17:32:46.211875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.228 [2024-12-09 17:32:46.211882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.228 [2024-12-09 17:32:46.211888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.228 [2024-12-09 17:32:46.211893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.228 [2024-12-09 17:32:46.213499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.228 [2024-12-09 17:32:46.213597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.228 [2024-12-09 17:32:46.213705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.228 [2024-12-09 17:32:46.213707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.792 [2024-12-09 17:32:46.957236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:17.792 17:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.792 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.049 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.050 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.050 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.050 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 Malloc1 
00:22:18.050 [2024-12-09 17:32:47.072635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.050 Malloc2 00:22:18.050 Malloc3 00:22:18.050 Malloc4 00:22:18.050 Malloc5 00:22:18.307 Malloc6 00:22:18.307 Malloc7 00:22:18.307 Malloc8 00:22:18.307 Malloc9 00:22:18.307 Malloc10 00:22:18.307 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.307 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:18.307 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.307 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2638440 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2638440 /var/tmp/bdevperf.sock 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2638440 ']' 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
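The create_subsystems step traced above cats one block of RPC commands per subsystem into rpcs.txt and replays the whole batch through rpc_cmd; the block itself is not echoed in this log, but given the Malloc1..Malloc10 bdev names and the listener notice on 10.0.0.2:4420 it plausibly amounts to something like the following (the RPC arguments here are an assumption, not copied from this run):

# Hypothetical reconstruction of the per-subsystem batch; the real text
# lives in test/nvmf/target/shutdown.sh and is not shown in this log.
for i in {1..10}; do
cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt
# rpc_cmd then replays the batch against the target's RPC socket, e.g.:
while read -r cmd; do scripts/rpc.py -s /var/tmp/spdk.sock $cmd; done < rpcs.txt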
00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 [2024-12-09 17:32:47.553699] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:22:18.567 [2024-12-09 17:32:47.553747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638440 ] 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.568 { 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme$subsystem", 00:22:18.568 "trtype": "$TEST_TRANSPORT", 00:22:18.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "$NVMF_PORT", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.568 "hdgst": ${hdgst:-false}, 00:22:18.568 "ddgst": ${ddgst:-false} 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 } 00:22:18.568 EOF 00:22:18.568 )") 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.568 { 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme$subsystem", 00:22:18.568 "trtype": "$TEST_TRANSPORT", 00:22:18.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "$NVMF_PORT", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.568 "hdgst": ${hdgst:-false}, 00:22:18.568 "ddgst": ${ddgst:-false} 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 } 00:22:18.568 EOF 00:22:18.568 )") 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.568 { 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme$subsystem", 00:22:18.568 "trtype": "$TEST_TRANSPORT", 00:22:18.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "$NVMF_PORT", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.568 "hdgst": ${hdgst:-false}, 00:22:18.568 "ddgst": ${ddgst:-false} 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 } 00:22:18.568 EOF 00:22:18.568 )") 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
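The gen_nvmf_target_json call traced above builds one JSON fragment per subsystem (the ${hdgst:-false}/${ddgst:-false} defaults are why every fragment comes out with false), and the jq ./IFS=,/printf sequence around this point joins the fragments into the merged document printed next. A rough standalone equivalent, with the outer "subsystems" wrapper paraphrased rather than copied from nvmf/common.sh:

# Approximation of gen_nvmf_target_json as traced above; per-controller
# params are from this run, the outer wrapper is a paraphrase.
gen_config() {
local IFS=, subsystem config=()
for subsystem in "$@"; do
config+=("$(
cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
 "adrfam": "ipv4", "trsvcid": "4420",
 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
 "hdgst": false, "ddgst": false},
 "method": "bdev_nvme_attach_controller"}
EOF
)")
done
# "${config[*]}" joins the fragments with the first character of IFS (",")
jq . <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [${config[*]}]}]}
EOF
}
gen_config 1 2 3   # the run above passes 1..10 and hands the JSON to bdevperf via /dev/fd/63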
00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:18.568 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme1", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme2", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme3", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme4", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme5", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme6", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme7", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme8", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme9", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 },{ 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme10", 00:22:18.568 "trtype": "tcp", 00:22:18.568 "traddr": "10.0.0.2", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "4420", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:18.568 "hdgst": false, 00:22:18.568 "ddgst": false 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 }' 00:22:18.568 [2024-12-09 17:32:47.630585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.568 [2024-12-09 17:32:47.670302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.461 Running I/O for 10 seconds... 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.461 17:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:20.461 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=80 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 80 -ge 100 ']' 00:22:20.749 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:21.036 17:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2638165 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2638165 ']' 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2638165 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2638165 00:22:21.036 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.318 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.318 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2638165' 00:22:21.318 killing process with pid 2638165 00:22:21.318 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2638165 00:22:21.318 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2638165 00:22:21.318 [2024-12-09 17:32:50.213489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.318 [2024-12-09 17:32:50.213710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213761] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 
00:22:21.319 [2024-12-09 17:32:50.213898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.213943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1af50 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is 
same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.319 [2024-12-09 17:32:50.217823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.217997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.218004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.218010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.218017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.218023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.218029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set 00:22:21.320 [2024-12-09 17:32:50.219327] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b910 is same with the state(6) to be set
00:22:21.320 [2024-12-09 17:32:50.219351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b910 is same with the state(6) to be set
00:22:21.321 (previous message repeated for tqpair=0xb1b910 through [2024-12-09 17:32:50.219765])
00:22:21.321 [2024-12-09 17:32:50.221552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c2f0 is same with the state(6) to be set
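For context on the flood above: the tcp.c:1790 message comes from a deliberate guard in SPDK's NVMe-oF TCP transport. While a qpair is being torn down it is parked in a terminal recv state, and every further attempt to set that same state logs once and returns. A self-contained sketch of that guard, paraphrased from the function named in the log (lib/nvmf/tcp.c); types are reduced to the minimum, the value 6 is taken from the state(6) in the messages, and treating it as the error state is an assumption here, not a verbatim copy of any particular SPDK revision:

    #include <stdio.h>

    /* Minimal stand-ins for SPDK's internal qpair/recv-state types.
     * Assumption: state value 6 (from "state(6)" above) is the terminal
     * error state in this SPDK revision. */
    enum pdu_recv_state { RECV_STATE_READY, RECV_STATE_ERROR = 6 };

    struct tcp_qpair { enum pdu_recv_state recv_state; };

    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Re-entering the current state is refused with exactly
                     * the message that floods the log above. */
                    fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                            (void *)tqpair, state);
                    return;
            }
            tqpair->recv_state = state;
    }

    int main(void)
    {
            struct tcp_qpair q = { RECV_STATE_ERROR };
            /* During teardown every pending event tries to set the error
             * state again, so the line repeats once per attempt: */
            set_recv_state(&q, RECV_STATE_ERROR);
            set_recv_state(&q, RECV_STATE_ERROR);
            return 0;
    }

That is why the same line repeats with only the microsecond timestamp changing: one occurrence per teardown event on the same tqpair, not one per distinct error.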
00:22:21.322 (previous message repeated for tqpair=0xb1c2f0 through [2024-12-09 17:32:50.221973])
00:22:21.322 [2024-12-09 17:32:50.222155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.322 [2024-12-09 17:32:50.222190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.322 [2024-12-09 17:32:50.222208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.322 [2024-12-09 17:32:50.222216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.322 [2024-12-09 17:32:50.222232] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.322 [2024-12-09 17:32:50.222386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.322 [2024-12-09 17:32:50.222393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.323 [2024-12-09 17:32:50.222836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.323 [2024-12-09 17:32:50.222843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.324 [2024-12-09 17:32:50.222850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.324 [2024-12-09 17:32:50.222857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.324 [2024-12-09 17:32:50.222866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.324 [2024-12-09 17:32:50.222872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.324 [2024-12-09 17:32:50.222881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.324 [2024-12-09 17:32:50.222884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c7e0 is same with the state(6) to be set
00:22:21.325 (previous message repeated for tqpair=0xb1c7e0 through [2024-12-09 17:32:50.223372], interleaved with the notices below)
00:22:21.324 [2024-12-09 17:32:50.222888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.324 [2024-12-09 17:32:50.222899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.324 [2024-12-09 17:32:50.222909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.325 (WRITE sqid:1 cid:52-63 nsid:1 lba:31232-32640 len:128 each submitted and aborted the same way through [2024-12-09 17:32:50.223138])
00:22:21.325 [2024-12-09 17:32:50.223150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.325 [2024-12-09 17:32:50.223157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.325 (READ sqid:1 cid:1-5 nsid:1 lba:24704-25216 len:128 each submitted and aborted the same way through [2024-12-09 17:32:50.223254])
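For reference, the "(00/08)" in the aborted completions above is the NVMe status pair (SCT/SC): status code type 0x0, the generic command status set, with status code 0x08, "Command Aborted due to SQ Deletion". That is the expected completion for I/O still queued when the submission queue is deleted during disconnect, which is what this test exercises. A self-contained sketch of how the pair decodes; the bitfield layout mirrors SPDK's struct spdk_nvme_status and is an assumption made for illustration only:

    #include <stdint.h>
    #include <stdio.h>

    /* 16-bit NVMe completion status field; layout mirrors SPDK's
     * struct spdk_nvme_status (assumption for illustration). */
    struct nvme_status {
            uint16_t p   : 1; /* phase tag */
            uint16_t sc  : 8; /* status code      -> the "08" */
            uint16_t sct : 3; /* status code type -> the "00" */
            uint16_t crd : 2; /* command retry delay */
            uint16_t m   : 1; /* more */
            uint16_t dnr : 1; /* do not retry */
    };

    int main(void)
    {
            struct nvme_status st = { .sct = 0x0, .sc = 0x08 };

            /* SCT 0x0 is the generic command status set; within it, SC 0x08
             * is "Command Aborted due to SQ Deletion" per the NVMe base
             * specification. */
            printf("(%02x/%02x) -> %s\n", (unsigned)st.sct, (unsigned)st.sc,
                   (st.sct == 0x0 && st.sc == 0x08) ? "ABORTED - SQ DELETION"
                                                    : "other status");
            return 0;
    }

The qid values distinguish the two teardown phases visible below: qid:1 completions are aborted I/O on the deleted queue, while the qid:0 completions that follow are the controller's outstanding ASYNC EVENT REQUESTs being aborted on the admin queue.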
00:22:21.325 [2024-12-09 17:32:50.223372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.325 [2024-12-09 17:32:50.223380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c7e0 is same with the state(6) to be set
00:22:21.326 [2024-12-09 17:32:50.223385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.326 (ASYNC EVENT REQUEST cid:1-3 aborted the same way through [2024-12-09 17:32:50.223429])
00:22:21.326 [2024-12-09 17:32:50.223436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1089750 is same with the state(6) to be set
00:22:21.326 (the same sequence of four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeats for tqpair=0x14dd610, 0x10871b0, 0x107d790, 0x1088870 and 0x14b41a0 through [2024-12-09 17:32:50.223871])
00:22:21.326 [2024-12-09 17:32:50.224105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.326 [2024-12-09 17:32:50.224125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.326 [2024-12-09 17:32:50.224137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.326 [2024-12-09 17:32:50.224144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.326 [2024-12-09 17:32:50.224153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327
[2024-12-09 17:32:50.224169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 17:32:50.224318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.327 [2024-12-09 17:32:50.224326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.327 [2024-12-09 
17:32:50.224334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.327 [2024-12-09 17:32:50.224493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.327 [2024-12-09 17:32:50.224499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.327 [2024-12-09 17:32:50.224500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.328 [2024-12-09 17:32:50.224692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1ccb0 is same with the state(6) to be set
00:22:21.328 [2024-12-09 17:32:50.224699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.328 [2024-12-09 17:32:50.224708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.328 [2024-12-09 17:32:50.224817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.328 [2024-12-09 17:32:50.224824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.224928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.329 [2024-12-09 17:32:50.224936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.329 [2024-12-09 17:32:50.225369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is 
same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.329 [2024-12-09 17:32:50.225689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225740] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.225792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d180 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.226351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d650 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.226373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d650 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.226381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d650 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.226387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d650 is same with the state(6) to be set 00:22:21.330 [2024-12-09 17:32:50.237492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.330 [2024-12-09 17:32:50.237809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.330 [2024-12-09 17:32:50.237819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.237830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.237840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.331 [2024-12-09 17:32:50.244714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.331 [2024-12-09 17:32:50.244724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.244980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.244990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.332 [2024-12-09 17:32:50.245323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.332 [2024-12-09 17:32:50.245335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245365] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.245467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.245504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:21.333 [2024-12-09 17:32:50.246776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:21.333 [2024-12-09 17:32:50.246802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd610 (9): Bad file descriptor 00:22:21.333 [2024-12-09 17:32:50.246846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1089750 (9): Bad file descriptor 00:22:21.333 [2024-12-09 17:32:50.246893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.246906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.246921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.246930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.246940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.246959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:21.333 [2024-12-09 17:32:50.246968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.246975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.246982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e610 is same with the state(6) to be set 00:22:21.333 [2024-12-09 17:32:50.247011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502830 is same with the state(6) to be set 00:22:21.333 [2024-12-09 17:32:50.247101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10892c0 is same with the state(6) to be set 00:22:21.333 [2024-12-09 17:32:50.247183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10871b0 (9): Bad file descriptor 00:22:21.333 [2024-12-09 17:32:50.247201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107d790 (9): Bad file descriptor 00:22:21.333 [2024-12-09 17:32:50.247238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.333 [2024-12-09 17:32:50.247297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.333 [2024-12-09 17:32:50.247304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e9120 is same with the state(6) to be set 00:22:21.333 [2024-12-09 17:32:50.247321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1088870 (9): Bad file descriptor 00:22:21.333 [2024-12-09 17:32:50.247334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b41a0 (9): Bad file descriptor 00:22:21.333 [2024-12-09 17:32:50.247431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.333 [2024-12-09 17:32:50.247443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 
17:32:50.247501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.334 [2024-12-09 17:32:50.247937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.334 [2024-12-09 17:32:50.247945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.247954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.247971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.247981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.247991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.247998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.335 [2024-12-09 17:32:50.248444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.335 [2024-12-09 17:32:50.248452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.248461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.248468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.248477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.248484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.248495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.248502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.248511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.248519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.248528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.248535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.250853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:21.336 [2024-12-09 17:32:50.252541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:21.336 [2024-12-09 17:32:50.252574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:21.336 [2024-12-09 17:32:50.252600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502830 (9): Bad file descriptor 00:22:21.336 [2024-12-09 17:32:50.252785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.336 [2024-12-09 17:32:50.252800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dd610 with addr=10.0.0.2, port=4420 00:22:21.336 [2024-12-09 17:32:50.252810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd610 is same with the state(6) to be set 00:22:21.336 [2024-12-09 17:32:50.252899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.336 [2024-12-09 17:32:50.252911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1088870 with addr=10.0.0.2, port=4420 00:22:21.336 [2024-12-09 17:32:50.252920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088870 is same with the state(6) to be set 00:22:21.336 [2024-12-09 17:32:50.253423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.336 [2024-12-09 17:32:50.253700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.336 [2024-12-09 17:32:50.253710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.337 [2024-12-09 17:32:50.253718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 
17:32:50.253893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.253990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.253998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.337 [2024-12-09 17:32:50.254211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.337 [2024-12-09 17:32:50.254227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.254553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.338 [2024-12-09 17:32:50.254563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.338 [2024-12-09 17:32:50.255177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.338 [2024-12-09 17:32:50.255196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107d790 with addr=10.0.0.2, port=4420 00:22:21.338 [2024-12-09 17:32:50.255207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107d790 is same with the state(6) to be set 00:22:21.338 [2024-12-09 17:32:50.255235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd610 (9): Bad file descriptor 
00:22:21.338 [2024-12-09 17:32:50.255247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1088870 (9): Bad file descriptor 00:22:21.338 [2024-12-09 17:32:50.255309] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.338 [2024-12-09 17:32:50.255716] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.338 [2024-12-09 17:32:50.255779] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.338 [2024-12-09 17:32:50.255827] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.338 [2024-12-09 17:32:50.256867] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.338 [2024-12-09 17:32:50.256906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:21.338 [2024-12-09 17:32:50.256926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9e610 (9): Bad file descriptor 00:22:21.338 [2024-12-09 17:32:50.257028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.338 [2024-12-09 17:32:50.257043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502830 with addr=10.0.0.2, port=4420 00:22:21.338 [2024-12-09 17:32:50.257052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502830 is same with the state(6) to be set 00:22:21.338 [2024-12-09 17:32:50.257062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107d790 (9): Bad file descriptor 00:22:21.338 [2024-12-09 17:32:50.257071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:21.338 [2024-12-09 17:32:50.257078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:21.338 [2024-12-09 17:32:50.257089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:21.338 [2024-12-09 17:32:50.257097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:21.338 [2024-12-09 17:32:50.257106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:21.338 [2024-12-09 17:32:50.257113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:21.339 [2024-12-09 17:32:50.257119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:21.339 [2024-12-09 17:32:50.257126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:21.339 [2024-12-09 17:32:50.257147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10892c0 (9): Bad file descriptor
00:22:21.339 [2024-12-09 17:32:50.257174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e9120 (9): Bad file descriptor
00:22:21.339 [2024-12-09 17:32:50.257232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.339 [2024-12-09 17:32:50.257679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.339 [2024-12-09 17:32:50.257688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.257992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.257999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.340 [2024-12-09 17:32:50.258157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.340 [2024-12-09 17:32:50.258167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147c050 is same with the state(6) to be set
00:22:21.341 [2024-12-09 17:32:50.258400] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:21.341 [2024-12-09 17:32:50.258495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502830 (9): Bad file descriptor
00:22:21.341 [2024-12-09 17:32:50.258508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:21.341 [2024-12-09 17:32:50.258516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:21.341 [2024-12-09 17:32:50.258524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:21.341 [2024-12-09 17:32:50.258533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:21.341 [2024-12-09 17:32:50.258579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.341 [2024-12-09 17:32:50.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.341 [2024-12-09 17:32:50.258882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.258987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.258995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.342 [2024-12-09 17:32:50.259270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.342 [2024-12-09 17:32:50.259279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.259616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.259624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d740 is same with the state(6) to be set
00:22:21.343 [2024-12-09 17:32:50.261535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.343 [2024-12-09 17:32:50.261655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.343 [2024-12-09 17:32:50.261664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.261985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.261992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.344 [2024-12-09 17:32:50.262129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.344 [2024-12-09 17:32:50.262138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 17:32:50.262435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.345 [2024-12-09 17:32:50.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.345 [2024-12-09 
17:32:50.262450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.345 [2024-12-09 17:32:50.262563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.345 [2024-12-09 17:32:50.262571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148a330 is same with the state(6) to be set 00:22:21.345 [2024-12-09 17:32:50.263754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.345 [2024-12-09 17:32:50.263770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:21.345 [2024-12-09 17:32:50.263782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:21.345 [2024-12-09 17:32:50.263914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.346 [2024-12-09 
00:22:21.346 [2024-12-09 17:32:50.263927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9e610 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.263936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e610 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.263944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.263951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.263958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.263965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.264274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.264287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1089750 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.264295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1089750 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.264377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.264391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10871b0 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.264398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10871b0 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.264615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.264625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b41a0 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.264632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b41a0 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.264641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9e610 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.265325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:21.346 [2024-12-09 17:32:50.265339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:21.346 [2024-12-09 17:32:50.265349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:21.346 [2024-12-09 17:32:50.265377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1089750 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.265388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10871b0 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.265398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b41a0 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.265407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.265415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.265423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.265432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.265646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.265660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1088870 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.265668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088870 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.265761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.265771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dd610 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.265779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd610 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.266006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.266017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107d790 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.266026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107d790 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.266034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.266041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.266049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.266056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.266068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.266076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.266084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.266090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.266098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.266105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.266113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.266119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.266160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:21.346 [2024-12-09 17:32:50.266179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1088870 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.266190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd610 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.266200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107d790 (9): Bad file descriptor
00:22:21.346 [2024-12-09 17:32:50.266345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.346 [2024-12-09 17:32:50.266357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502830 with addr=10.0.0.2, port=4420
00:22:21.346 [2024-12-09 17:32:50.266366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502830 is same with the state(6) to be set
00:22:21.346 [2024-12-09 17:32:50.266375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.266383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.266391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.266398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.266406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.266414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.266421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:21.346 [2024-12-09 17:32:50.266428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:21.346 [2024-12-09 17:32:50.266436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:21.346 [2024-12-09 17:32:50.266443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:21.346 [2024-12-09 17:32:50.266450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:21.347 [2024-12-09 17:32:50.266457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
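The repeated "connect() failed, errno = 111" entries above are plain POSIX ECONNREFUSED results: while the target side is being torn down by the shutdown test, nothing is listening on 10.0.0.2:4420, so every reconnect attempt from the host is refused and spdk_nvme_ctrlr_reconnect_poll_async() gives up, which is what the "Resetting controller failed" lines report. A minimal standalone sketch of the failing step (not SPDK code; it only borrows the address and port from the log) that yields the same errno when no listener is present:

/* Standalone illustration of the errno = 111 lines above: a plain TCP
 * connect() to an address with no listener fails with ECONNREFUSED,
 * which is 111 on Linux. Not SPDK code; the address and port are the
 * ones that appear in the log. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run while the listener is down, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c:1054 lines above.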
00:22:21.347 [2024-12-09 17:32:50.266479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502830 (9): Bad file descriptor
00:22:21.347 [2024-12-09 17:32:50.266501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:21.347 [2024-12-09 17:32:50.266513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:21.347 [2024-12-09 17:32:50.266520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:21.347 [2024-12-09 17:32:50.266528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:21.347 [2024-12-09 17:32:50.267006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.347 [2024-12-09 17:32:50.267018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...(the same NOTICE pair repeats for READ cid:6-35, lba 25344-29056 in 128-block steps, every command ABORTED - SQ DELETION (00/08))...
00:22:21.348 [2024-12-09 17:32:50.267583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.348 [2024-12-09 17:32:50.446212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...(the pair repeats for READ cid:37-62, lba 29312-32512, then WRITE cid:0-4, lba 32768-33280, then READ cid:63, lba 32640; all ABORTED - SQ DELETION (00/08))...
00:22:21.349 [2024-12-09 17:32:50.447917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148b5f0 is same with the state(6) to be set
00:22:21.349 [2024-12-09 17:32:50.451145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.349 [2024-12-09 17:32:50.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...(the pair repeats for READ cid:1-62, lba 24704-32512 in 128-block steps, every command ABORTED - SQ DELETION (00/08))...
00:22:21.351 [2024-12-09 17:32:50.454619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.351 [2024-12-09 17:32:50.454642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.351 [2024-12-09 17:32:50.454668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148dbc0 is same with the state(6) to be set 00:22:21.351 [2024-12-09 17:32:50.457805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:21.630 task offset: 25344 on job bdev=Nvme10n1 fails 00:22:21.630 00:22:21.630 Latency(us) 00:22:21.630 [2024-12-09T16:32:50.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.630 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme1n1 ended in about 0.95 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme1n1 : 0.95 202.30 12.64 67.43 0.00 235063.59 17226.61 212711.13 00:22:21.630 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme2n1 ended in about 0.94 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme2n1 : 0.94 204.15 12.76 68.05 0.00 229007.85 17601.10 217704.35 00:22:21.630 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme3n1 ended in about 0.95 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme3n1 : 0.95 206.31 12.89 67.37 0.00 224005.65 15603.81 211712.49 00:22:21.630 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme4n1 ended in about 0.94 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme4n1 : 0.94 272.88 17.05 68.22 0.00 176498.49 14979.66 208716.56 00:22:21.630 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme5n1 ended in about 0.95 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme5n1 : 0.95 204.83 12.80 67.23 0.00 217711.10 16103.13 216705.71 00:22:21.630 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme6n1 ended in about 1.14 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme6n1 : 1.14 173.19 10.82 56.26 0.00 257671.15 17351.44 383479.22 00:22:21.630 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme7n1 ended in about 0.95 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme7n1 : 0.95 224.26 14.02 67.70 0.00 195519.69 3885.35 226692.14 00:22:21.630 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme8n1 ended in about 1.14 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme8n1 : 1.14 167.80 10.49 55.93 0.00 256666.70 12233.39 345530.76 00:22:21.630 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme9n1 ended in about 0.94 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme9n1 : 0.94 204.40 12.78 68.13 0.00 201709.04 6054.28 218702.99 00:22:21.630 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.630 Job: Nvme10n1 ended in about 0.94 seconds with error 00:22:21.630 Verification LBA range: start 0x0 length 0x400 00:22:21.630 Nvme10n1 : 0.94 205.32 12.83 68.44 0.00 196854.49 
19348.72 231685.36 00:22:21.630 [2024-12-09T16:32:50.809Z] =================================================================================================================== 00:22:21.630 [2024-12-09T16:32:50.809Z] Total : 2065.44 129.09 654.77 0.00 217946.16 3885.35 383479.22 00:22:21.630 [2024-12-09 17:32:50.506845] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:21.630 [2024-12-09 17:32:50.506900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.507379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.507407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10892c0 with addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.507420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10892c0 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.507658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.507672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e9120 with addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.507681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e9120 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.507705] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507721] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507732] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507745] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507756] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507766] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507778] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:21.630 [2024-12-09 17:32:50.507789] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
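The connect() failures just above carry errno = 111, which is ECONNREFUSED: tc3 has already shut the target down, so every reconnect attempt to 10.0.0.2:4420 is refused outright. A minimal bash probe (hypothetical, not part of shutdown.sh) shows the same condition from the host side:

    # Hypothetical probe, not in the test scripts: check whether anything still
    # listens on the target address used above; a refused connect is errno 111.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "target still listening on 10.0.0.2:4420"
    else
        echo "connect() refused, matching the posix.c errno = 111 errors above"
    fi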
00:22:21.630 [2024-12-09 17:32:50.508287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10892c0 (9): Bad file descriptor 00:22:21.630 [2024-12-09 17:32:50.508423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e9120 (9): Bad file descriptor 00:22:21.630 [2024-12-09 17:32:50.508473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:21.630 [2024-12-09 17:32:50.508643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.508660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9e610 with addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.508669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e610 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.508918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.508932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b41a0 with addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.508941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b41a0 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.509086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.509099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10871b0 with addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.509107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10871b0 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.509294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.509309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1089750 with addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.509318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1089750 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.509406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.630 [2024-12-09 17:32:50.509418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107d790 with 
addr=10.0.0.2, port=4420 00:22:21.630 [2024-12-09 17:32:50.509425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107d790 is same with the state(6) to be set 00:22:21.630 [2024-12-09 17:32:50.509619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.631 [2024-12-09 17:32:50.509632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dd610 with addr=10.0.0.2, port=4420 00:22:21.631 [2024-12-09 17:32:50.509640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd610 is same with the state(6) to be set 00:22:21.631 [2024-12-09 17:32:50.509777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.631 [2024-12-09 17:32:50.509790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1088870 with addr=10.0.0.2, port=4420 00:22:21.631 [2024-12-09 17:32:50.509798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088870 is same with the state(6) to be set 00:22:21.631 [2024-12-09 17:32:50.509807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.509814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.509824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.509833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.509845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.509851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.509860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.509867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
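The "Failed to flush tqpair=... (9)" messages above report errno 9, EBADF: by the time bdev_nvme tries to flush the qpair, its socket descriptor has already been closed. The same errno is easy to reproduce in plain bash (an illustration only, unrelated to SPDK internals):

    exec 4>/dev/null   # open descriptor 4
    exec 4>&-          # close it again, as the qpair teardown does to its socket
    echo hi >&4        # fails with "Bad file descriptor" (errno 9, EBADF)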
00:22:21.631 [2024-12-09 17:32:50.510059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.631 [2024-12-09 17:32:50.510074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502830 with addr=10.0.0.2, port=4420 00:22:21.631 [2024-12-09 17:32:50.510082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502830 is same with the state(6) to be set 00:22:21.631 [2024-12-09 17:32:50.510094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9e610 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b41a0 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10871b0 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1089750 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107d790 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd610 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1088870 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502830 (9): Bad file descriptor 00:22:21.631 [2024-12-09 17:32:50.510209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:22:21.631 [2024-12-09 17:32:50.510306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:21.631 [2024-12-09 17:32:50.510481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:21.631 [2024-12-09 17:32:50.510491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:21.631 [2024-12-09 17:32:50.510500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:21.631 [2024-12-09 17:32:50.510508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
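Each remaining subsystem then fails through the same four steps printed above: nvme_ctrlr_process_init sees the controller in error state, the reconnect poller reports reinitialization failed, nvme_ctrlr_fail marks the controller failed, and bdev_nvme logs "Resetting controller failed". When triaging a run like this by hand, the controller states can be dumped over the app's RPC socket; a sketch, assuming the target app were still up and using the in-tree rpc.py:

    # Hypothetical debugging step, not part of this trace: list bdev_nvme
    # controllers and their state via the default /var/tmp/spdk.sock.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers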
00:22:21.631 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2638440 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2638440 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2638440 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:22.568 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.569 rmmod nvme_tcp 00:22:22.569 
rmmod nvme_fabrics 00:22:22.569 rmmod nvme_keyring 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2638165 ']' 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2638165 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2638165 ']' 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2638165 00:22:22.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2638165) - No such process 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2638165 is not found' 00:22:22.569 Process with pid 2638165 is not found 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.569 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.104 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.104 00:22:25.104 real 0m8.045s 00:22:25.104 user 0m20.434s 00:22:25.104 sys 0m1.353s 00:22:25.104 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.104 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.104 ************************************ 00:22:25.104 END TEST nvmf_shutdown_tc3 00:22:25.105 ************************************ 00:22:25.105 17:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.105 ************************************ 00:22:25.105 START TEST nvmf_shutdown_tc4 00:22:25.105 ************************************ 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:25.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:25.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.105 17:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:25.105 Found net devices under 0000:af:00.0: cvl_0_0 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:25.105 Found net devices under 0000:af:00.1: cvl_0_1 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.105 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.106 17:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.106 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:22:25.106 00:22:25.106 --- 10.0.0.2 ping statistics --- 00:22:25.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.106 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:22:25.106 00:22:25.106 --- 10.0.0.1 ping statistics --- 00:22:25.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.106 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2639706 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2639706 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2639706 ']' 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
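The nvmftestinit trace above amounts to a small recipe: move one e810 port (cvl_0_0) into a private network namespace for the target, keep the peer port (cvl_0_1) in the root namespace for the initiator, address both ends of the 10.0.0.0/24 link, open TCP port 4420, and verify reachability in both directions. Condensed into a standalone sketch, using exactly the commands and names from this run:

    ip netns add cvl_0_0_ns_spdk                         # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator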
00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.106 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.365 [2024-12-09 17:32:54.310287] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:22:25.365 [2024-12-09 17:32:54.310337] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.365 [2024-12-09 17:32:54.389430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.365 [2024-12-09 17:32:54.427902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.365 [2024-12-09 17:32:54.427940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.365 [2024-12-09 17:32:54.427947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.365 [2024-12-09 17:32:54.427953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.365 [2024-12-09 17:32:54.427958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.365 [2024-12-09 17:32:54.429364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.365 [2024-12-09 17:32:54.429472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.365 [2024-12-09 17:32:54.429581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.365 [2024-12-09 17:32:54.429582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.297 [2024-12-09 17:32:55.192661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:26.297 17:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.297 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.297 Malloc1 
00:22:26.297 [2024-12-09 17:32:55.301164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.297 Malloc2 00:22:26.297 Malloc3 00:22:26.297 Malloc4 00:22:26.297 Malloc5 00:22:26.555 Malloc6 00:22:26.555 Malloc7 00:22:26.555 Malloc8 00:22:26.555 Malloc9 00:22:26.555 Malloc10 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2639984 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:26.555 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:26.812 [2024-12-09 17:32:55.805850] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:32.080 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.080 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2639706 00:22:32.080 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2639706 ']' 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2639706 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2639706 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2639706' 00:22:32.081 killing process with pid 2639706 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2639706 00:22:32.081 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2639706 00:22:32.081 [2024-12-09 17:33:00.801024] 
00:22:32.081 [2024-12-09 17:33:00.801024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d66e0 is same with the state(6) to be set
00:22:32.081 [the same recv-state error repeats for tqpair=0x19d66e0 x6 (17:33:00.801024-.801111), 0x19d6bb0 x3 (.801815-.801856), 0x1c2d950 x8 (.802380-.802460) and 0x19d6210 x10 (.803015-.803099)]
00:22:32.081 Write completed with error (sct=0, sc=8)
00:22:32.081 starting I/O failed: -6
00:22:32.081 [identical write-error / I/O-failed pairs repeat for every outstanding I/O on each failing qpair and are omitted below; in the raw log they interleave with the recv-state errors]
00:22:32.081 [2024-12-09 17:33:00.807719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:32.081 [2024-12-09 17:33:00.808683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:32.082 [2024-12-09 17:33:00.809708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:32.082 [more recv-state errors: tqpair=0x1c466e0 x17 (17:33:00.810147-.810288), 0x1c46a60 x8 (.810588-.810657), 0x1c46de0 x9 (.811071-.811144)]
00:22:32.083 [2024-12-09 17:33:00.811313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:32.083 NVMe io qpair process completion error
00:22:32.083 [more recv-state errors: tqpair=0x1baf8c0 x10 (17:33:00.811548-.811626), 0x19d22c0 x3 (.813795-.813824), 0x19d1410 x9 (.813925-.813991)]
00:22:32.083 [2024-12-09 17:33:00.815459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:32.084 [2024-12-09 17:33:00.816354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:32.084 [2024-12-09 17:33:00.817390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:32.085 [2024-12-09 17:33:00.819161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:32.085 NVMe io qpair process completion error
00:22:32.085 [2024-12-09 17:33:00.820148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:32.085 [2024-12-09 17:33:00.821067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:32.086 [2024-12-09 17:33:00.822089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:32.086 [2024-12-09 17:33:00.823821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:32.086 NVMe io qpair process completion error
00:22:32.086 [the write-error / I/O-failed flood resumes and continues below]
00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086
Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 [2024-12-09 17:33:00.825070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 Write completed with error (sct=0, sc=8) 00:22:32.086 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 
Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 [2024-12-09 17:33:00.825871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting 
I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 [2024-12-09 17:33:00.826965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 
00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.087 starting I/O failed: -6 00:22:32.087 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 
00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 [2024-12-09 17:33:00.829104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:32.088 NVMe io qpair process completion error 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error 
(sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 [2024-12-09 17:33:00.830863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 
starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 Write completed with error (sct=0, sc=8) 00:22:32.088 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 [2024-12-09 17:33:00.831875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write 
completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write 
completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 [2024-12-09 17:33:00.835208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:32.089 NVMe io qpair process completion error 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 
00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 [2024-12-09 17:33:00.836186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 starting I/O failed: -6 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.089 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with 
error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 [2024-12-09 17:33:00.837059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 
00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 [2024-12-09 17:33:00.838123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 
00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.090 starting I/O failed: -6 00:22:32.090 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 [2024-12-09 17:33:00.840997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:32.091 NVMe io qpair process completion error 00:22:32.091 Write 
completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 [2024-12-09 17:33:00.842003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 starting I/O failed: -6 00:22:32.091 Write completed with error (sct=0, sc=8) 00:22:32.091 Write 
completed with error (sct=0, sc=8)
00:22:32.091 starting I/O failed: -6
00:22:32.091 Write completed with error (sct=0, sc=8)
00:22:32.091 [the two lines above repeat for every queued write on each failing qpair; verbatim duplicates trimmed]
00:22:32.091 [2024-12-09 17:33:00.842946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:32.092 [2024-12-09 17:33:00.843947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:32.092 [2024-12-09 17:33:00.845785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:32.092 NVMe io qpair process completion error
00:22:32.092 [2024-12-09 17:33:00.846713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:32.093 [2024-12-09 17:33:00.847630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:32.093 [2024-12-09 17:33:00.848829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:32.094 [2024-12-09 17:33:00.852869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:32.094 NVMe io qpair process completion error
00:22:32.094 [2024-12-09 17:33:00.853845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:32.094 [2024-12-09 17:33:00.854673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:32.094 [2024-12-09 17:33:00.856027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:32.095 [2024-12-09 17:33:00.859670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:32.095 NVMe io qpair process completion error
00:22:32.095 Write completed with error (sct=0, sc=8)
00:22:32.095 [the line above repeats for the remaining drained completions; verbatim duplicates trimmed]
00:22:32.096 Initializing NVMe Controllers
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:32.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:32.096 Controller IO queue size 128, less than required. (reported after each attach above)
00:22:32.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
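The queue-size advisory above means the benchmark submitted more concurrent writes than the target's 128-entry IO queues can accept, so the surplus sits queued at the NVMe driver until completions (here, transport errors) drain it. A hedged sketch of how such a run can be sized to stay inside that limit, using standard spdk_nvme_perf options (-q queue depth, -o IO size in bytes, -w workload, -t run time, -r transport ID); the concrete values are illustrative, not the ones shutdown.sh used:

# Illustrative sizing only; the values are assumptions, not taken from this log.
# Keeping -q below the controller IO queue size (128) avoids driver-side queueing.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 64 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4'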
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:32.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:32.096 Initialization complete. Launching workers.
00:22:32.096 ========================================================
00:22:32.096                                                                        Latency(us)
00:22:32.096 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0 :    2239.26      96.22   57170.34     757.60  106837.89
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    2220.38      95.41   57045.66     699.57  126156.27
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0 :    2230.99      95.86   57307.73     507.49  120531.64
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0 :    2204.89      94.74   57450.88     904.26  103267.59
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0 :    2218.05      95.31   57123.07     844.49  101237.93
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0 :    2200.65      94.56   57589.67     803.43  100417.10
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 :    2154.19      92.56   58846.94    1020.72   96265.75
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0 :    2180.28      93.68   58172.98     689.17   95396.85
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0 :    2183.68      93.83   58108.81     990.21  110232.53
00:22:32.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0 :    2141.88      92.03   59257.57     697.29  112799.06
00:22:32.096 ========================================================
00:22:32.096 Total                                                                    :   21974.25     944.21   57797.61     507.49  126156.27
00:22:32.096
00:22:32.096 [2024-12-09 17:33:00.865708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bd890 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf900 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bdef0 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bd560 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bfae0 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bdbc0 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf720 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23be410 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23be740 is same with the state(6) to be set
00:22:32.096 [2024-12-09 17:33:00.865984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bea70 is same with the state(6) to be set
00:22:32.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:32.096 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2639984
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2639984
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:33.031 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2639984
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
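The NOT wait assertion traced above is the crux of shutdown_tc4: the step passes only if the perf process has already failed. A minimal sketch of that inversion pattern, reconstructed from the es= bookkeeping visible in the trace (the real autotest_common.sh helper also validates its argument via valid_exec_arg and special-cases signal-style exit codes, the (( es > 128 )) line):

# Sketch only: run the wrapped command, capture its status, succeed iff it failed.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # inverted check; the trace spells it as (( !es == 0 ))
}

From this log: spdk_nvme_perf (pid 2639984) exited non-zero once the target was shut down, so wait 2639984 propagates that failure and NOT wait 2639984 passes the assertion.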
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:33.032 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:33.291 rmmod nvme_tcp
00:22:33.291 rmmod nvme_fabrics
00:22:33.291 rmmod nvme_keyring
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2639706 ']'
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2639706
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2639706 ']'
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2639706
00:22:33.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2639706) - No such process
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2639706 is not found'
00:22:33.291 Process with pid 2639706 is not found
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:33.291 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
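The nvmftestfini teardown traced above compresses to a small amount of shell. A paraphrased sketch of it (function names follow the trace; nvmfpid stands in for the target pid, 2639706 here, and the retry sleep is an assumption since the loop's pacing is not shown in this log):

# Sketch of the teardown path seen in the trace; bodies are paraphrased.
nvmf_cleanup_sketch() {
    sync
    set +e                           # module removal may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                      # assumption: retry pacing is not visible in the log
    done
    set -e
    # killprocess: probe the pid first; a vanished process is not an error here
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
    else
        echo "Process with pid $nvmfpid is not found"
    fi
    # iptr: drop SPDK-tagged firewall rules, the exact pipeline shown in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}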
00:22:35.194 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:35.194
00:22:35.194 real 0m10.527s
00:22:35.194 user 0m27.688s
00:22:35.194 sys 0m5.121s
00:22:35.194 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:35.194 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:35.194 ************************************
00:22:35.194 END TEST nvmf_shutdown_tc4
00:22:35.194 ************************************
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:35.453
00:22:35.453 real 0m41.704s
00:22:35.453 user 1m44.201s
00:22:35.453 sys 0m13.967s
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:35.453 ************************************
00:22:35.453 END TEST nvmf_shutdown
00:22:35.453 ************************************
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:35.453 ************************************
00:22:35.453 START TEST nvmf_nsid
00:22:35.453 ************************************
00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:35.453 * Looking for test storage...
00:22:35.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:35.453 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:35.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.713 --rc genhtml_branch_coverage=1 00:22:35.713 --rc genhtml_function_coverage=1 00:22:35.713 --rc genhtml_legend=1 00:22:35.713 --rc geninfo_all_blocks=1 00:22:35.713 --rc geninfo_unexecuted_blocks=1 00:22:35.713 00:22:35.713 ' 00:22:35.713 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:35.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.713 --rc genhtml_branch_coverage=1 00:22:35.714 --rc genhtml_function_coverage=1 00:22:35.714 --rc genhtml_legend=1 00:22:35.714 --rc geninfo_all_blocks=1 00:22:35.714 --rc geninfo_unexecuted_blocks=1 00:22:35.714 00:22:35.714 ' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.714 --rc genhtml_branch_coverage=1 00:22:35.714 --rc genhtml_function_coverage=1 00:22:35.714 --rc genhtml_legend=1 00:22:35.714 --rc geninfo_all_blocks=1 00:22:35.714 --rc geninfo_unexecuted_blocks=1 00:22:35.714 00:22:35.714 ' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.714 --rc genhtml_branch_coverage=1 00:22:35.714 --rc genhtml_function_coverage=1 00:22:35.714 --rc genhtml_legend=1 00:22:35.714 --rc geninfo_all_blocks=1 00:22:35.714 --rc geninfo_unexecuted_blocks=1 00:22:35.714 00:22:35.714 ' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.714 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.282 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:42.283 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:42.283 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
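The two "Found 0000:af:00.x (0x8086 - 0x159b)" lines above come from nvmf/common.sh matching each Intel E810 function against the e810 device-ID list and then resolving it to its kernel netdev through sysfs, as the "Found net devices under ..." lines just below confirm. A minimal sketch of that discovery step, using only the paths and expansions visible in the trace (pci_devs is assumed to be pre-populated with the PCI addresses):

for pci in "${pci_devs[@]}"; do                      # e.g. 0000:af:00.0, 0000:af:00.1
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the bound netdev(s)
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path: cvl_0_0, cvl_0_1
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done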
00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:42.283 Found net devices under 0000:af:00.0: cvl_0_0 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:42.283 Found net devices under 0000:af:00.1: cvl_0_1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.283 17:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:22:42.283 00:22:42.283 --- 10.0.0.2 ping statistics --- 00:22:42.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.283 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:42.283 00:22:42.283 --- 10.0.0.1 ping statistics --- 00:22:42.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.283 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2644956 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2644956 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2644956 ']' 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.283 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.283 [2024-12-09 17:33:10.651007] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:22:42.283 [2024-12-09 17:33:10.651058] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.283 [2024-12-09 17:33:10.731283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.283 [2024-12-09 17:33:10.770746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.283 [2024-12-09 17:33:10.770784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.283 [2024-12-09 17:33:10.770791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.284 [2024-12-09 17:33:10.770798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.284 [2024-12-09 17:33:10.770803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.284 [2024-12-09 17:33:10.771348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2645147 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=83fdc80e-5f37-4537-aa4a-a624309a373c 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=78112b65-41eb-4bee-960a-1019ac3ed0e8 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=744f5eb2-34db-4e8e-8c05-c9f2a91cb7e0 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.284 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.284 null0 00:22:42.284 null1 00:22:42.284 null2 00:22:42.284 [2024-12-09 17:33:10.957247] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:22:42.284 [2024-12-09 17:33:10.957292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645147 ] 00:22:42.284 [2024-12-09 17:33:10.960531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.284 [2024-12-09 17:33:10.984715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2645147 /var/tmp/tgt2.sock 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2645147 ']' 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:42.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
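The three uuidgen values above (ns1uuid/ns2uuid/ns3uuid) become namespace UUIDs on the second target, and the checks further below verify that each attached namespace reports an NGUID equal to its UUID with the dashes stripped. A minimal sketch of one such check, assembled from the exact commands the trace runs (tr -d -, nvme id-ns -o json, jq -r .nguid); the ${...^^} uppercase normalization is an assumption standing in for however the helper makes the comparison case-insensitive:

uuid=83fdc80e-5f37-4537-aa4a-a624309a373c            # ns1uuid from the trace
expected=$(tr -d - <<< "$uuid")                      # 32 hex chars, no dashes
nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
[[ ${nguid^^} == "${expected^^}" ]] && echo "NGUID matches namespace UUID"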
00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.284 [2024-12-09 17:33:11.029081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.284 [2024-12-09 17:33:11.070629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:42.284 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:42.542 [2024-12-09 17:33:11.587488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.542 [2024-12-09 17:33:11.603573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:42.542 nvme0n1 nvme0n2 00:22:42.542 nvme1n1 00:22:42.542 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:42.542 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:42.542 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:43.919 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:44.856 17:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 83fdc80e-5f37-4537-aa4a-a624309a373c 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=83fdc80e5f374537aa4aa624309a373c 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 83FDC80E5F374537AA4AA624309A373C 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 83FDC80E5F374537AA4AA624309A373C == \8\3\F\D\C\8\0\E\5\F\3\7\4\5\3\7\A\A\4\A\A\6\2\4\3\0\9\A\3\7\3\C ]] 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 78112b65-41eb-4bee-960a-1019ac3ed0e8 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=78112b6541eb4bee960a1019ac3ed0e8 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 78112B6541EB4BEE960A1019AC3ED0E8 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 78112B6541EB4BEE960A1019AC3ED0E8 == \7\8\1\1\2\B\6\5\4\1\E\B\4\B\E\E\9\6\0\A\1\0\1\9\A\C\3\E\D\0\E\8 ]] 00:22:44.856 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:44.857 17:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 744f5eb2-34db-4e8e-8c05-c9f2a91cb7e0 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=744f5eb234db4e8e8c05c9f2a91cb7e0 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 744F5EB234DB4E8E8C05C9F2A91CB7E0 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 744F5EB234DB4E8E8C05C9F2A91CB7E0 == \7\4\4\F\5\E\B\2\3\4\D\B\4\E\8\E\8\C\0\5\C\9\F\2\A\9\1\C\B\7\E\0 ]] 00:22:44.857 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2645147 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2645147 ']' 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2645147 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2645147 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2645147' 00:22:45.116 killing process with pid 2645147 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2645147 00:22:45.116 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2645147 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.375 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.375 rmmod nvme_tcp 00:22:45.375 rmmod nvme_fabrics 00:22:45.634 rmmod nvme_keyring 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2644956 ']' 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2644956 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2644956 ']' 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2644956 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.634 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2644956 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2644956' 00:22:45.635 killing process with pid 2644956 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2644956 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2644956 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.635 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.168 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.168 00:22:48.168 real 0m12.387s 00:22:48.168 user 0m9.727s 
00:22:48.168 sys 0m5.430s 00:22:48.168 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.168 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.168 ************************************ 00:22:48.168 END TEST nvmf_nsid 00:22:48.168 ************************************ 00:22:48.168 17:33:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:48.168 00:22:48.168 real 12m3.811s 00:22:48.168 user 25m54.614s 00:22:48.168 sys 3m43.386s 00:22:48.168 17:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.168 17:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:48.168 ************************************ 00:22:48.168 END TEST nvmf_target_extra 00:22:48.168 ************************************ 00:22:48.168 17:33:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:48.168 17:33:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.168 17:33:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.168 17:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:48.168 ************************************ 00:22:48.168 START TEST nvmf_host 00:22:48.168 ************************************ 00:22:48.168 17:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:48.168 * Looking for test storage... 00:22:48.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:48.168 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.169 --rc genhtml_branch_coverage=1 00:22:48.169 --rc genhtml_function_coverage=1 00:22:48.169 --rc genhtml_legend=1 00:22:48.169 --rc geninfo_all_blocks=1 00:22:48.169 --rc geninfo_unexecuted_blocks=1 00:22:48.169 00:22:48.169 ' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.169 --rc genhtml_branch_coverage=1 00:22:48.169 --rc genhtml_function_coverage=1 00:22:48.169 --rc genhtml_legend=1 00:22:48.169 --rc geninfo_all_blocks=1 00:22:48.169 --rc geninfo_unexecuted_blocks=1 00:22:48.169 00:22:48.169 ' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.169 --rc genhtml_branch_coverage=1 00:22:48.169 --rc genhtml_function_coverage=1 00:22:48.169 --rc genhtml_legend=1 00:22:48.169 --rc geninfo_all_blocks=1 00:22:48.169 --rc geninfo_unexecuted_blocks=1 00:22:48.169 00:22:48.169 ' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.169 --rc genhtml_branch_coverage=1 00:22:48.169 --rc genhtml_function_coverage=1 00:22:48.169 --rc genhtml_legend=1 00:22:48.169 --rc geninfo_all_blocks=1 00:22:48.169 --rc geninfo_unexecuted_blocks=1 00:22:48.169 00:22:48.169 ' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
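Sourcing nvmf/common.sh for the nvmf_host suite repeats the host-identity setup already seen at 17:33:04, and the gen-hostnqn call appears again just below. A minimal sketch of that setup, assuming the host ID is simply the UUID portion of the generated NQN (the trace only shows the two resulting values, both 801347e8-3fd0-e911-906e-0017a4403562):

NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # assumed derivation: reuse the uuid part
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 "${NVME_HOST[@]}"

The connect line mirrors the one the nsid test issued earlier in this log.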
00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.169 ************************************ 00:22:48.169 START TEST nvmf_multicontroller 00:22:48.169 ************************************ 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:48.169 * Looking for test storage... 
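Note the captured failure "common.sh: line 33: [: : integer expression expected" just above: build_nvmf_app_args reaches '[' '' -eq 1 ']' with an empty left operand, so the test builtin aborts with status 2, which the surrounding if merely treats as false, and the run continues. A defensive sketch of the pattern that avoids the noise; SOME_TEST_FLAG is a hypothetical stand-in, since the trace does not show which variable arrived empty:

    # Hypothetical reproduction of the failure mode logged at common.sh line 33.
    SOME_TEST_FLAG=""                   # empty in this CI run (stand-in name)
    # [ "$SOME_TEST_FLAG" -eq 1 ]      # -> "[: : integer expression expected"
    if [[ "${SOME_TEST_FLAG:-0}" -eq 1 ]]; then   # default empty/unset to 0 first
        echo "flag enabled"
    fi
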
00:22:48.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:48.169 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.429 --rc genhtml_branch_coverage=1 00:22:48.429 --rc genhtml_function_coverage=1 00:22:48.429 --rc genhtml_legend=1 00:22:48.429 --rc geninfo_all_blocks=1 00:22:48.429 --rc geninfo_unexecuted_blocks=1 00:22:48.429 00:22:48.429 ' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.429 --rc genhtml_branch_coverage=1 00:22:48.429 --rc genhtml_function_coverage=1 00:22:48.429 --rc genhtml_legend=1 00:22:48.429 --rc geninfo_all_blocks=1 00:22:48.429 --rc geninfo_unexecuted_blocks=1 00:22:48.429 00:22:48.429 ' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.429 --rc genhtml_branch_coverage=1 00:22:48.429 --rc genhtml_function_coverage=1 00:22:48.429 --rc genhtml_legend=1 00:22:48.429 --rc geninfo_all_blocks=1 00:22:48.429 --rc geninfo_unexecuted_blocks=1 00:22:48.429 00:22:48.429 ' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.429 --rc genhtml_branch_coverage=1 00:22:48.429 --rc genhtml_function_coverage=1 00:22:48.429 --rc genhtml_legend=1 00:22:48.429 --rc geninfo_all_blocks=1 00:22:48.429 --rc geninfo_unexecuted_blocks=1 00:22:48.429 00:22:48.429 ' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:48.429 17:33:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.429 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.430 17:33:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.430 17:33:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.011 
17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:55.011 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:55.011 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.011 17:33:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:55.011 Found net devices under 0000:af:00.0: cvl_0_0 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:55.011 Found net devices under 0000:af:00.1: cvl_0_1 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
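The device discovery that just completed maps each whitelisted PCI function to its kernel net interface by globbing sysfs and keeping only links that are up, which is how cvl_0_0 and cvl_0_1 were found under 0000:af:00.0/1. A condensed sketch of that resolution step, assuming an operstate read behind the traced [[ up == up ]] check (the PCI addresses are this run's; pci_bus_cache population is omitted):

    # Resolve NVMe-oF-capable PCI functions to net interfaces via sysfs,
    # mirroring the pci_net_devs glob in the trace.
    pci_devs=(0000:af:00.0 0000:af:00.1)   # E810 functions found in this run
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for net_dev in "${pci_net_devs[@]}"; do
            [[ -e $net_dev/operstate ]] || continue
            if [[ $(< "$net_dev/operstate") == up ]]; then
                echo "Found net devices under $pci: ${net_dev##*/}"
                net_devs+=("${net_dev##*/}")
            fi
        done
    done
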
00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.011 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:22:55.012 00:22:55.012 --- 10.0.0.2 ping statistics --- 00:22:55.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.012 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:22:55.012 00:22:55.012 --- 10.0.0.1 ping statistics --- 00:22:55.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.012 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2649208 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2649208 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2649208 ']' 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 [2024-12-09 17:33:23.428449] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
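Before the target came up, nvmf_tcp_init split the two E810 ports across a network namespace so initiator and target traffic really crosses the wire: 10.0.0.1 stays in the root namespace on cvl_0_1, 10.0.0.2 moves into cvl_0_0_ns_spdk on cvl_0_0, port 4420 is opened in iptables, and both directions are ping-verified. A condensed replay of those steps, taken directly from the traced commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
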
00:22:55.012 [2024-12-09 17:33:23.428496] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.012 [2024-12-09 17:33:23.508726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.012 [2024-12-09 17:33:23.550007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.012 [2024-12-09 17:33:23.550045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.012 [2024-12-09 17:33:23.550053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.012 [2024-12-09 17:33:23.550058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.012 [2024-12-09 17:33:23.550064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.012 [2024-12-09 17:33:23.551482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.012 [2024-12-09 17:33:23.551587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.012 [2024-12-09 17:33:23.551588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 [2024-12-09 17:33:23.700360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 Malloc0 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 [2024-12-09 17:33:23.760881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 [2024-12-09 17:33:23.768793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 Malloc1 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.012 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2649448 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2649448 /var/tmp/bdevperf.sock 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2649448 ']' 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
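Everything from nvmfappstart to the bdevperf wait above is ordinary JSON-RPC setup: nvmf_tgt runs inside the target namespace, two 64 MiB / 512 B malloc bdevs are exported through cnode1 and cnode2 on listeners 4420 and 4421, and bdevperf starts idle on its own RPC socket. A compressed replay using scripts/rpc.py, with paths shortened relative to the workspace (the calls mirror the traced rpc_cmd invocations):

    RPC="scripts/rpc.py"                  # the trace drives the same RPCs via rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...cnode2/Malloc1 repeat the same sequence, then bdevperf starts idle (-z)
    # on a private socket so controllers can be attached per test case:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
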
00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.013 17:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.013 NVMe0n1 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.013 1 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.013 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.271 request: 00:22:55.271 { 00:22:55.271 "name": "NVMe0", 00:22:55.271 "trtype": "tcp", 00:22:55.271 "traddr": "10.0.0.2", 00:22:55.271 "adrfam": "ipv4", 00:22:55.271 "trsvcid": "4420", 00:22:55.271 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:55.271 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:55.271 "hostaddr": "10.0.0.1", 00:22:55.271 "prchk_reftag": false, 00:22:55.271 "prchk_guard": false, 00:22:55.271 "hdgst": false, 00:22:55.271 "ddgst": false, 00:22:55.271 "allow_unrecognized_csi": false, 00:22:55.271 "method": "bdev_nvme_attach_controller", 00:22:55.271 "req_id": 1 00:22:55.271 } 00:22:55.271 Got JSON-RPC error response 00:22:55.271 response: 00:22:55.271 { 00:22:55.271 "code": -114, 00:22:55.271 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:55.271 } 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.271 request: 00:22:55.271 { 00:22:55.271 "name": "NVMe0", 00:22:55.271 "trtype": "tcp", 00:22:55.271 "traddr": "10.0.0.2", 00:22:55.271 "adrfam": "ipv4", 00:22:55.271 "trsvcid": "4420", 00:22:55.271 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.271 "hostaddr": "10.0.0.1", 00:22:55.271 "prchk_reftag": false, 00:22:55.271 "prchk_guard": false, 00:22:55.271 "hdgst": false, 00:22:55.271 "ddgst": false, 00:22:55.271 "allow_unrecognized_csi": false, 00:22:55.271 "method": "bdev_nvme_attach_controller", 00:22:55.271 "req_id": 1 00:22:55.271 } 00:22:55.271 Got JSON-RPC error response 00:22:55.271 response: 00:22:55.271 { 00:22:55.271 "code": -114, 00:22:55.271 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:55.271 } 00:22:55.271 17:33:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.271 request: 00:22:55.271 { 00:22:55.271 "name": "NVMe0", 00:22:55.271 "trtype": "tcp", 00:22:55.271 "traddr": "10.0.0.2", 00:22:55.271 "adrfam": "ipv4", 00:22:55.271 "trsvcid": "4420", 00:22:55.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.271 "hostaddr": "10.0.0.1", 00:22:55.271 "prchk_reftag": false, 00:22:55.271 "prchk_guard": false, 00:22:55.271 "hdgst": false, 00:22:55.271 "ddgst": false, 00:22:55.271 "multipath": "disable", 00:22:55.271 "allow_unrecognized_csi": false, 00:22:55.271 "method": "bdev_nvme_attach_controller", 00:22:55.271 "req_id": 1 00:22:55.271 } 00:22:55.271 Got JSON-RPC error response 00:22:55.271 response: 00:22:55.271 { 00:22:55.271 "code": -114, 00:22:55.271 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:55.271 } 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.271 17:33:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:55.271 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.272 request: 00:22:55.272 { 00:22:55.272 "name": "NVMe0", 00:22:55.272 "trtype": "tcp", 00:22:55.272 "traddr": "10.0.0.2", 00:22:55.272 "adrfam": "ipv4", 00:22:55.272 "trsvcid": "4420", 00:22:55.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.272 "hostaddr": "10.0.0.1", 00:22:55.272 "prchk_reftag": false, 00:22:55.272 "prchk_guard": false, 00:22:55.272 "hdgst": false, 00:22:55.272 "ddgst": false, 00:22:55.272 "multipath": "failover", 00:22:55.272 "allow_unrecognized_csi": false, 00:22:55.272 "method": "bdev_nvme_attach_controller", 00:22:55.272 "req_id": 1 00:22:55.272 } 00:22:55.272 Got JSON-RPC error response 00:22:55.272 response: 00:22:55.272 { 00:22:55.272 "code": -114, 00:22:55.272 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:55.272 } 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.272 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.529 NVMe0n1 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
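The four NOT-wrapped attempts above pin down the name-reuse rules of bdev_nvme_attach_controller as this run exercises them: once NVMe0 exists, repeating the same traddr/trsvcid is rejected with -114 regardless of a different hostnqn, of pointing at cnode2, or of the -x disable / -x failover mode, while the final call targeting the second listener (4421) is accepted as an additional path. A sketch of the accepted path-management calls against bdevperf's RPC socket (socket path and NQNs as in the trace):

    B="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # First attach creates bdev NVMe0n1:
    $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Same name and subsystem, but a different listener: accepted as a second path.
    $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    # The extra path can be dropped again without destroying the bdev:
    $B bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
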
00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.529 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:55.529 17:33:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:56.906 { 00:22:56.906 "results": [ 00:22:56.906 { 00:22:56.906 "job": "NVMe0n1", 00:22:56.906 "core_mask": "0x1", 00:22:56.906 "workload": "write", 00:22:56.906 "status": "finished", 00:22:56.906 "queue_depth": 128, 00:22:56.906 "io_size": 4096, 00:22:56.906 "runtime": 1.00488, 00:22:56.906 "iops": 25287.59652893878, 00:22:56.906 "mibps": 98.7796739411671, 00:22:56.906 "io_failed": 0, 00:22:56.906 "io_timeout": 0, 00:22:56.906 "avg_latency_us": 5051.990162790393, 00:22:56.906 "min_latency_us": 2995.9314285714286, 00:22:56.906 "max_latency_us": 8800.548571428571 00:22:56.906 } 00:22:56.906 ], 00:22:56.906 "core_count": 1 00:22:56.906 } 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2649448 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2649448 ']' 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2649448 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649448 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649448' 00:22:56.906 killing process with pid 2649448 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2649448 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2649448 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.906 17:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:56.906 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:56.906 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:56.906 [2024-12-09 17:33:23.871904] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:22:56.906 [2024-12-09 17:33:23.871961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649448 ] 00:22:56.907 [2024-12-09 17:33:23.944998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.907 [2024-12-09 17:33:23.984314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.907 [2024-12-09 17:33:24.613899] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name e0883ab7-e284-4b99-adee-366a632ab309 already exists 00:22:56.907 [2024-12-09 17:33:24.613927] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:e0883ab7-e284-4b99-adee-366a632ab309 alias for bdev NVMe1n1 00:22:56.907 [2024-12-09 17:33:24.613935] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:56.907 Running I/O for 1 seconds... 00:22:56.907 25219.00 IOPS, 98.51 MiB/s 00:22:56.907 Latency(us) 00:22:56.907 [2024-12-09T16:33:26.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.907 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:56.907 NVMe0n1 : 1.00 25287.60 98.78 0.00 0.00 5051.99 2995.93 8800.55 00:22:56.907 [2024-12-09T16:33:26.086Z] =================================================================================================================== 00:22:56.907 [2024-12-09T16:33:26.086Z] Total : 25287.60 98.78 0.00 0.00 5051.99 2995.93 8800.55 00:22:56.907 Received shutdown signal, test time was about 1.000000 seconds 00:22:56.907 00:22:56.907 Latency(us) 00:22:56.907 [2024-12-09T16:33:26.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.907 [2024-12-09T16:33:26.086Z] =================================================================================================================== 00:22:56.907 [2024-12-09T16:33:26.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.907 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.907 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.907 rmmod nvme_tcp 00:22:56.907 rmmod nvme_fabrics 00:22:56.907 rmmod nvme_keyring 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:57.165 
17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2649208 ']' 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2649208 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2649208 ']' 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2649208 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649208 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649208' 00:22:57.165 killing process with pid 2649208 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2649208 00:22:57.165 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2649208 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.424 17:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.327 00:22:59.327 real 0m11.216s 00:22:59.327 user 0m12.388s 00:22:59.327 sys 0m5.256s 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.327 ************************************ 00:22:59.327 END TEST nvmf_multicontroller 00:22:59.327 ************************************ 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.327 17:33:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.585 ************************************ 00:22:59.585 START TEST nvmf_aer 00:22:59.585 ************************************ 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:59.585 * Looking for test storage... 00:22:59.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:59.585 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.586 --rc genhtml_branch_coverage=1 00:22:59.586 --rc genhtml_function_coverage=1 00:22:59.586 --rc genhtml_legend=1 00:22:59.586 --rc geninfo_all_blocks=1 00:22:59.586 --rc geninfo_unexecuted_blocks=1 00:22:59.586 00:22:59.586 ' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.586 --rc genhtml_branch_coverage=1 00:22:59.586 --rc genhtml_function_coverage=1 00:22:59.586 --rc genhtml_legend=1 00:22:59.586 --rc geninfo_all_blocks=1 00:22:59.586 --rc geninfo_unexecuted_blocks=1 00:22:59.586 00:22:59.586 ' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.586 --rc genhtml_branch_coverage=1 00:22:59.586 --rc genhtml_function_coverage=1 00:22:59.586 --rc genhtml_legend=1 00:22:59.586 --rc geninfo_all_blocks=1 00:22:59.586 --rc geninfo_unexecuted_blocks=1 00:22:59.586 00:22:59.586 ' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.586 --rc genhtml_branch_coverage=1 00:22:59.586 --rc genhtml_function_coverage=1 00:22:59.586 --rc genhtml_legend=1 00:22:59.586 --rc geninfo_all_blocks=1 00:22:59.586 --rc geninfo_unexecuted_blocks=1 00:22:59.586 00:22:59.586 ' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.586 17:33:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.154 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:06.155 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:06.155 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:06.155 Found net devices under 0000:af:00.0: cvl_0_0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.155 17:33:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:06.155 Found net devices under 0000:af:00.1: cvl_0_1 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.155 
17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:23:06.155 00:23:06.155 --- 10.0.0.2 ping statistics --- 00:23:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.155 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:23:06.155 00:23:06.155 --- 10.0.0.1 ping statistics --- 00:23:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.155 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2653190 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2653190 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2653190 ']' 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 [2024-12-09 17:33:34.713125] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
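The ping results above come out of the namespace plumbing nvmf_tcp_init performs a few lines earlier: the target-side E810 port is moved into its own network namespace so initiator and target traffic actually cross the physical link. Condensed from the commands logged above (the netdev names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this host):

ip netns add cvl_0_0_ns_spdk                         # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why nvmfappstart prefixes nvmf_tgt with "ip netns exec cvl_0_0_ns_spdk" below: the 10.0.0.2 listener only exists inside that namespace.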
00:23:06.155 [2024-12-09 17:33:34.713166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.155 [2024-12-09 17:33:34.790021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.155 [2024-12-09 17:33:34.831518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.155 [2024-12-09 17:33:34.831556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.155 [2024-12-09 17:33:34.831563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.155 [2024-12-09 17:33:34.831569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.155 [2024-12-09 17:33:34.831574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.155 [2024-12-09 17:33:34.833086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.155 [2024-12-09 17:33:34.833194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.155 [2024-12-09 17:33:34.833305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.155 [2024-12-09 17:33:34.833305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 [2024-12-09 17:33:34.983002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.155 17:33:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 Malloc0 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.155 [2024-12-09 17:33:35.046117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.155 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.156 [ 00:23:06.156 { 00:23:06.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.156 "subtype": "Discovery", 00:23:06.156 "listen_addresses": [], 00:23:06.156 "allow_any_host": true, 00:23:06.156 "hosts": [] 00:23:06.156 }, 00:23:06.156 { 00:23:06.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.156 "subtype": "NVMe", 00:23:06.156 "listen_addresses": [ 00:23:06.156 { 00:23:06.156 "trtype": "TCP", 00:23:06.156 "adrfam": "IPv4", 00:23:06.156 "traddr": "10.0.0.2", 00:23:06.156 "trsvcid": "4420" 00:23:06.156 } 00:23:06.156 ], 00:23:06.156 "allow_any_host": true, 00:23:06.156 "hosts": [], 00:23:06.156 "serial_number": "SPDK00000000000001", 00:23:06.156 "model_number": "SPDK bdev Controller", 00:23:06.156 "max_namespaces": 2, 00:23:06.156 "min_cntlid": 1, 00:23:06.156 "max_cntlid": 65519, 00:23:06.156 "namespaces": [ 00:23:06.156 { 00:23:06.156 "nsid": 1, 00:23:06.156 "bdev_name": "Malloc0", 00:23:06.156 "name": "Malloc0", 00:23:06.156 "nguid": "9EEED8504F3D4DD6A9A32C0CE0618B42", 00:23:06.156 "uuid": "9eeed850-4f3d-4dd6-a9a3-2c0ce0618b42" 00:23:06.156 } 00:23:06.156 ] 00:23:06.156 } 00:23:06.156 ] 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2653328 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:06.156 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.414 Malloc1 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.414 Asynchronous Event Request test 00:23:06.414 Attaching to 10.0.0.2 00:23:06.414 Attached to 10.0.0.2 00:23:06.414 Registering asynchronous event callbacks... 00:23:06.414 Starting namespace attribute notice tests for all controllers... 00:23:06.414 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:06.414 aer_cb - Changed Namespace 00:23:06.414 Cleaning up... 
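Behind the xtrace noise, the AER check is a short RPC conversation: build a one-namespace subsystem, start the aer example tool with a touch file, then hot-add a second namespace to provoke the Changed Namespace notice logged above. A sketch using rpc.py as shorthand for the rpc_cmd wrapper (the shorthand and the touch-file timing described in the comments are inferences from the script flow; every argument is taken from this run):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# test/nvme/aer/aer connects with -n 2 -t /tmp/aer_touch_file and appears to create
# the touch file once its AER callbacks are armed; waitforfile polls for it, then:
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the AER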
00:23:06.414 [ 00:23:06.414 { 00:23:06.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.414 "subtype": "Discovery", 00:23:06.414 "listen_addresses": [], 00:23:06.414 "allow_any_host": true, 00:23:06.414 "hosts": [] 00:23:06.414 }, 00:23:06.414 { 00:23:06.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.414 "subtype": "NVMe", 00:23:06.414 "listen_addresses": [ 00:23:06.414 { 00:23:06.414 "trtype": "TCP", 00:23:06.414 "adrfam": "IPv4", 00:23:06.414 "traddr": "10.0.0.2", 00:23:06.414 "trsvcid": "4420" 00:23:06.414 } 00:23:06.414 ], 00:23:06.414 "allow_any_host": true, 00:23:06.414 "hosts": [], 00:23:06.414 "serial_number": "SPDK00000000000001", 00:23:06.414 "model_number": "SPDK bdev Controller", 00:23:06.414 "max_namespaces": 2, 00:23:06.414 "min_cntlid": 1, 00:23:06.414 "max_cntlid": 65519, 00:23:06.414 "namespaces": [ 00:23:06.414 { 00:23:06.414 "nsid": 1, 00:23:06.414 "bdev_name": "Malloc0", 00:23:06.414 "name": "Malloc0", 00:23:06.414 "nguid": "9EEED8504F3D4DD6A9A32C0CE0618B42", 00:23:06.414 "uuid": "9eeed850-4f3d-4dd6-a9a3-2c0ce0618b42" 00:23:06.414 }, 00:23:06.414 { 00:23:06.414 "nsid": 2, 00:23:06.414 "bdev_name": "Malloc1", 00:23:06.414 "name": "Malloc1", 00:23:06.414 "nguid": "A507539B83F64A669252D9E8970791C6", 00:23:06.414 "uuid": "a507539b-83f6-4a66-9252-d9e8970791c6" 00:23:06.414 } 00:23:06.414 ] 00:23:06.414 } 00:23:06.414 ] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2653328 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.414 rmmod 
nvme_tcp 00:23:06.414 rmmod nvme_fabrics 00:23:06.414 rmmod nvme_keyring 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2653190 ']' 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2653190 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2653190 ']' 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2653190 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.414 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653190 00:23:06.672 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.672 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.672 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653190' 00:23:06.672 killing process with pid 2653190 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2653190 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2653190 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.673 17:33:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.207 00:23:09.207 real 0m9.349s 00:23:09.207 user 0m5.518s 00:23:09.207 sys 0m4.895s 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.207 ************************************ 00:23:09.207 END TEST nvmf_aer 00:23:09.207 ************************************ 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.207 ************************************ 00:23:09.207 START TEST nvmf_async_init 00:23:09.207 ************************************ 00:23:09.207 17:33:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:09.207 * Looking for test storage... 00:23:09.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:09.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.207 --rc genhtml_branch_coverage=1 00:23:09.207 --rc genhtml_function_coverage=1 00:23:09.207 --rc genhtml_legend=1 00:23:09.207 --rc geninfo_all_blocks=1 00:23:09.207 --rc geninfo_unexecuted_blocks=1 00:23:09.207 00:23:09.207 ' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:09.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.207 --rc genhtml_branch_coverage=1 00:23:09.207 --rc genhtml_function_coverage=1 00:23:09.207 --rc genhtml_legend=1 00:23:09.207 --rc geninfo_all_blocks=1 00:23:09.207 --rc geninfo_unexecuted_blocks=1 00:23:09.207 00:23:09.207 ' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:09.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.207 --rc genhtml_branch_coverage=1 00:23:09.207 --rc genhtml_function_coverage=1 00:23:09.207 --rc genhtml_legend=1 00:23:09.207 --rc geninfo_all_blocks=1 00:23:09.207 --rc geninfo_unexecuted_blocks=1 00:23:09.207 00:23:09.207 ' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:09.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.207 --rc genhtml_branch_coverage=1 00:23:09.207 --rc genhtml_function_coverage=1 00:23:09.207 --rc genhtml_legend=1 00:23:09.207 --rc geninfo_all_blocks=1 00:23:09.207 --rc geninfo_unexecuted_blocks=1 00:23:09.207 00:23:09.207 ' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.207 17:33:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.207 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:09.208 17:33:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5adb9606a94b4fdf8ca78b755ce125d0 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.208 17:33:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:15.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:15.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:15.779 Found net devices under 0000:af:00.0: cvl_0_0 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:15.779 Found net devices under 0000:af:00.1: cvl_0_1 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.779 17:33:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.779 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.780 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.780 17:33:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:23:15.780 00:23:15.780 --- 10.0.0.2 ping statistics --- 00:23:15.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.780 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:15.780 00:23:15.780 --- 10.0.0.1 ping statistics --- 00:23:15.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.780 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2656919 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2656919 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2656919 ']' 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 [2024-12-09 17:33:44.111999] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:23:15.780 [2024-12-09 17:33:44.112047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.780 [2024-12-09 17:33:44.190448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.780 [2024-12-09 17:33:44.227728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.780 [2024-12-09 17:33:44.227764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.780 [2024-12-09 17:33:44.227772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.780 [2024-12-09 17:33:44.227778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.780 [2024-12-09 17:33:44.227783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.780 [2024-12-09 17:33:44.228322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 [2024-12-09 17:33:44.371864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 null0 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5adb9606a94b4fdf8ca78b755ce125d0 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 [2024-12-09 17:33:44.416129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 nvme0n1 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.780 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 [ 00:23:15.780 { 00:23:15.780 "name": "nvme0n1", 00:23:15.780 "aliases": [ 00:23:15.780 "5adb9606-a94b-4fdf-8ca7-8b755ce125d0" 00:23:15.780 ], 00:23:15.780 "product_name": "NVMe disk", 00:23:15.780 "block_size": 512, 00:23:15.780 "num_blocks": 2097152, 00:23:15.780 "uuid": "5adb9606-a94b-4fdf-8ca7-8b755ce125d0", 00:23:15.780 "numa_id": 1, 00:23:15.780 "assigned_rate_limits": { 00:23:15.780 "rw_ios_per_sec": 0, 00:23:15.780 "rw_mbytes_per_sec": 0, 00:23:15.780 "r_mbytes_per_sec": 0, 00:23:15.780 "w_mbytes_per_sec": 0 00:23:15.780 }, 00:23:15.780 "claimed": false, 00:23:15.780 "zoned": false, 00:23:15.780 "supported_io_types": { 00:23:15.780 "read": true, 00:23:15.780 "write": true, 00:23:15.780 "unmap": false, 00:23:15.780 "flush": true, 00:23:15.780 "reset": true, 00:23:15.780 "nvme_admin": true, 00:23:15.780 "nvme_io": true, 00:23:15.780 "nvme_io_md": false, 00:23:15.780 "write_zeroes": true, 00:23:15.780 "zcopy": false, 00:23:15.780 "get_zone_info": false, 00:23:15.780 "zone_management": false, 00:23:15.780 "zone_append": false, 00:23:15.780 "compare": true, 00:23:15.780 "compare_and_write": true, 00:23:15.780 "abort": true, 00:23:15.780 "seek_hole": false, 00:23:15.780 "seek_data": false, 00:23:15.780 "copy": true, 00:23:15.780 "nvme_iov_md": false 00:23:15.780 }, 00:23:15.780 
"memory_domains": [ 00:23:15.780 { 00:23:15.780 "dma_device_id": "system", 00:23:15.780 "dma_device_type": 1 00:23:15.780 } 00:23:15.780 ], 00:23:15.780 "driver_specific": { 00:23:15.780 "nvme": [ 00:23:15.780 { 00:23:15.780 "trid": { 00:23:15.780 "trtype": "TCP", 00:23:15.780 "adrfam": "IPv4", 00:23:15.780 "traddr": "10.0.0.2", 00:23:15.780 "trsvcid": "4420", 00:23:15.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:15.780 }, 00:23:15.780 "ctrlr_data": { 00:23:15.780 "cntlid": 1, 00:23:15.780 "vendor_id": "0x8086", 00:23:15.780 "model_number": "SPDK bdev Controller", 00:23:15.780 "serial_number": "00000000000000000000", 00:23:15.780 "firmware_revision": "25.01", 00:23:15.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.780 "oacs": { 00:23:15.780 "security": 0, 00:23:15.780 "format": 0, 00:23:15.780 "firmware": 0, 00:23:15.780 "ns_manage": 0 00:23:15.780 }, 00:23:15.780 "multi_ctrlr": true, 00:23:15.780 "ana_reporting": false 00:23:15.780 }, 00:23:15.780 "vs": { 00:23:15.781 "nvme_version": "1.3" 00:23:15.781 }, 00:23:15.781 "ns_data": { 00:23:15.781 "id": 1, 00:23:15.781 "can_share": true 00:23:15.781 } 00:23:15.781 } 00:23:15.781 ], 00:23:15.781 "mp_policy": "active_passive" 00:23:15.781 } 00:23:15.781 } 00:23:15.781 ] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 [2024-12-09 17:33:44.677651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:15.781 [2024-12-09 17:33:44.677704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e9550 (9): Bad file descriptor 00:23:15.781 [2024-12-09 17:33:44.809298] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 [ 00:23:15.781 { 00:23:15.781 "name": "nvme0n1", 00:23:15.781 "aliases": [ 00:23:15.781 "5adb9606-a94b-4fdf-8ca7-8b755ce125d0" 00:23:15.781 ], 00:23:15.781 "product_name": "NVMe disk", 00:23:15.781 "block_size": 512, 00:23:15.781 "num_blocks": 2097152, 00:23:15.781 "uuid": "5adb9606-a94b-4fdf-8ca7-8b755ce125d0", 00:23:15.781 "numa_id": 1, 00:23:15.781 "assigned_rate_limits": { 00:23:15.781 "rw_ios_per_sec": 0, 00:23:15.781 "rw_mbytes_per_sec": 0, 00:23:15.781 "r_mbytes_per_sec": 0, 00:23:15.781 "w_mbytes_per_sec": 0 00:23:15.781 }, 00:23:15.781 "claimed": false, 00:23:15.781 "zoned": false, 00:23:15.781 "supported_io_types": { 00:23:15.781 "read": true, 00:23:15.781 "write": true, 00:23:15.781 "unmap": false, 00:23:15.781 "flush": true, 00:23:15.781 "reset": true, 00:23:15.781 "nvme_admin": true, 00:23:15.781 "nvme_io": true, 00:23:15.781 "nvme_io_md": false, 00:23:15.781 "write_zeroes": true, 00:23:15.781 "zcopy": false, 00:23:15.781 "get_zone_info": false, 00:23:15.781 "zone_management": false, 00:23:15.781 "zone_append": false, 00:23:15.781 "compare": true, 00:23:15.781 "compare_and_write": true, 00:23:15.781 "abort": true, 00:23:15.781 "seek_hole": false, 00:23:15.781 "seek_data": false, 00:23:15.781 "copy": true, 00:23:15.781 "nvme_iov_md": false 00:23:15.781 }, 00:23:15.781 "memory_domains": [ 00:23:15.781 { 00:23:15.781 "dma_device_id": "system", 00:23:15.781 "dma_device_type": 1 00:23:15.781 } 00:23:15.781 ], 00:23:15.781 "driver_specific": { 00:23:15.781 "nvme": [ 00:23:15.781 { 00:23:15.781 "trid": { 00:23:15.781 "trtype": "TCP", 00:23:15.781 "adrfam": "IPv4", 00:23:15.781 "traddr": "10.0.0.2", 00:23:15.781 "trsvcid": "4420", 00:23:15.781 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:15.781 }, 00:23:15.781 "ctrlr_data": { 00:23:15.781 "cntlid": 2, 00:23:15.781 "vendor_id": "0x8086", 00:23:15.781 "model_number": "SPDK bdev Controller", 00:23:15.781 "serial_number": "00000000000000000000", 00:23:15.781 "firmware_revision": "25.01", 00:23:15.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.781 "oacs": { 00:23:15.781 "security": 0, 00:23:15.781 "format": 0, 00:23:15.781 "firmware": 0, 00:23:15.781 "ns_manage": 0 00:23:15.781 }, 00:23:15.781 "multi_ctrlr": true, 00:23:15.781 "ana_reporting": false 00:23:15.781 }, 00:23:15.781 "vs": { 00:23:15.781 "nvme_version": "1.3" 00:23:15.781 }, 00:23:15.781 "ns_data": { 00:23:15.781 "id": 1, 00:23:15.781 "can_share": true 00:23:15.781 } 00:23:15.781 } 00:23:15.781 ], 00:23:15.781 "mp_policy": "active_passive" 00:23:15.781 } 00:23:15.781 } 00:23:15.781 ] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
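
Note: that detach closes the plain-text half of the test. Collected in one place, the RPC sequence it traced is, as a minimal sketch (scripts/rpc.py stands in for the rpc_cmd wrapper; addresses, ports, NQNs, and the nguid are copied from the trace above):

    # target side: TCP transport, 1 GiB null bdev (1024 MiB, 512-byte blocks), subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5adb9606a94b4fdf8ca78b755ce125d0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach as bdev controller nvme0, then inspect the resulting nvme0n1
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1

The TLS variant that follows reruns the same attach against a second, PSK-secured listener on port 4421.
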
00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vz5RMLZeXu 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vz5RMLZeXu 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.vz5RMLZeXu 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 [2024-12-09 17:33:44.882256] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.781 [2024-12-09 17:33:44.882348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.781 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.781 [2024-12-09 17:33:44.898310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.042 nvme0n1 00:23:16.042 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.042 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:16.042 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.042 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.042 [ 00:23:16.042 { 00:23:16.042 "name": "nvme0n1", 00:23:16.042 "aliases": [ 00:23:16.042 "5adb9606-a94b-4fdf-8ca7-8b755ce125d0" 00:23:16.042 ], 00:23:16.042 "product_name": "NVMe disk", 00:23:16.042 "block_size": 512, 00:23:16.042 "num_blocks": 2097152, 00:23:16.042 "uuid": "5adb9606-a94b-4fdf-8ca7-8b755ce125d0", 00:23:16.042 "numa_id": 1, 00:23:16.042 "assigned_rate_limits": { 00:23:16.042 "rw_ios_per_sec": 0, 00:23:16.042 "rw_mbytes_per_sec": 0, 00:23:16.042 "r_mbytes_per_sec": 0, 00:23:16.042 "w_mbytes_per_sec": 0 00:23:16.042 }, 00:23:16.042 "claimed": false, 00:23:16.042 "zoned": false, 00:23:16.042 "supported_io_types": { 00:23:16.042 "read": true, 00:23:16.042 "write": true, 00:23:16.042 "unmap": false, 00:23:16.042 "flush": true, 00:23:16.042 "reset": true, 00:23:16.042 "nvme_admin": true, 00:23:16.042 "nvme_io": true, 00:23:16.042 "nvme_io_md": false, 00:23:16.042 "write_zeroes": true, 00:23:16.042 "zcopy": false, 00:23:16.042 "get_zone_info": false, 00:23:16.042 "zone_management": false, 00:23:16.042 "zone_append": false, 00:23:16.042 "compare": true, 00:23:16.042 "compare_and_write": true, 00:23:16.042 "abort": true, 00:23:16.042 "seek_hole": false, 00:23:16.042 "seek_data": false, 00:23:16.042 "copy": true, 00:23:16.043 "nvme_iov_md": false 00:23:16.043 }, 00:23:16.043 "memory_domains": [ 00:23:16.043 { 00:23:16.043 "dma_device_id": "system", 00:23:16.043 "dma_device_type": 1 00:23:16.043 } 00:23:16.043 ], 00:23:16.043 "driver_specific": { 00:23:16.043 "nvme": [ 00:23:16.043 { 00:23:16.043 "trid": { 00:23:16.043 "trtype": "TCP", 00:23:16.043 "adrfam": "IPv4", 00:23:16.043 "traddr": "10.0.0.2", 00:23:16.043 "trsvcid": "4421", 00:23:16.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.043 }, 00:23:16.043 "ctrlr_data": { 00:23:16.043 "cntlid": 3, 00:23:16.043 "vendor_id": "0x8086", 00:23:16.043 "model_number": "SPDK bdev Controller", 00:23:16.043 "serial_number": "00000000000000000000", 00:23:16.043 "firmware_revision": "25.01", 00:23:16.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.043 "oacs": { 00:23:16.043 "security": 0, 00:23:16.043 "format": 0, 00:23:16.043 "firmware": 0, 00:23:16.043 "ns_manage": 0 00:23:16.043 }, 00:23:16.043 "multi_ctrlr": true, 00:23:16.043 "ana_reporting": false 00:23:16.043 }, 00:23:16.043 "vs": { 00:23:16.043 "nvme_version": "1.3" 00:23:16.043 }, 00:23:16.043 "ns_data": { 00:23:16.043 "id": 1, 00:23:16.043 "can_share": true 00:23:16.043 } 00:23:16.043 } 00:23:16.043 ], 00:23:16.043 "mp_policy": "active_passive" 00:23:16.043 } 00:23:16.043 } 00:23:16.043 ] 00:23:16.043 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.043 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.043 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.043 17:33:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.vz5RMLZeXu 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
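
Note: clearing the trap ends the TLS leg of the test. Its essence, as a hedged sketch (the PSK interchange string, key name, and NQNs are verbatim from the trace; /tmp/psk.key stands in for the mktemp-generated path, and the 0600 mode mirrors the chmod the test performs before registering the key):

    # register a file-backed TLS PSK with the keyring, then lock the subsystem down to hosts that present it
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key
    chmod 0600 /tmp/psk.key
    scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # host side: same attach as before, now carrying the host NQN (-q) and the PSK
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both listener calls print "TLS support is considered experimental" in the trace, and the third bdev_get_bdevs dump above shows the secured association (trsvcid 4421, cntlid 3).
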
00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.043 rmmod nvme_tcp 00:23:16.043 rmmod nvme_fabrics 00:23:16.043 rmmod nvme_keyring 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2656919 ']' 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2656919 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2656919 ']' 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2656919 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656919 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656919' 00:23:16.043 killing process with pid 2656919 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2656919 00:23:16.043 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2656919 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.301 17:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.206 00:23:18.206 real 0m9.405s 00:23:18.206 user 0m3.033s 00:23:18.206 sys 0m4.771s 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.206 ************************************ 00:23:18.206 END TEST nvmf_async_init 00:23:18.206 ************************************ 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.206 17:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.465 ************************************ 00:23:18.465 START TEST dma 00:23:18.465 ************************************ 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:18.465 * Looking for test storage... 00:23:18.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.465 --rc genhtml_branch_coverage=1 00:23:18.465 --rc genhtml_function_coverage=1 00:23:18.465 --rc genhtml_legend=1 00:23:18.465 --rc geninfo_all_blocks=1 00:23:18.465 --rc geninfo_unexecuted_blocks=1 00:23:18.465 00:23:18.465 ' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.465 --rc genhtml_branch_coverage=1 00:23:18.465 --rc genhtml_function_coverage=1 00:23:18.465 --rc genhtml_legend=1 00:23:18.465 --rc geninfo_all_blocks=1 00:23:18.465 --rc geninfo_unexecuted_blocks=1 00:23:18.465 00:23:18.465 ' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.465 --rc genhtml_branch_coverage=1 00:23:18.465 --rc genhtml_function_coverage=1 00:23:18.465 --rc genhtml_legend=1 00:23:18.465 --rc geninfo_all_blocks=1 00:23:18.465 --rc geninfo_unexecuted_blocks=1 00:23:18.465 00:23:18.465 ' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.465 --rc genhtml_branch_coverage=1 00:23:18.465 --rc genhtml_function_coverage=1 00:23:18.465 --rc genhtml_legend=1 00:23:18.465 --rc geninfo_all_blocks=1 00:23:18.465 --rc geninfo_unexecuted_blocks=1 00:23:18.465 00:23:18.465 ' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.465 
17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.465 17:33:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:18.466 00:23:18.466 real 0m0.209s 00:23:18.466 user 0m0.137s 00:23:18.466 sys 0m0.087s 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.466 17:33:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:18.466 ************************************ 00:23:18.466 END TEST dma 00:23:18.466 ************************************ 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.725 ************************************ 00:23:18.725 START TEST nvmf_identify 00:23:18.725 
************************************ 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:18.725 * Looking for test storage... 00:23:18.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.725 --rc genhtml_branch_coverage=1 00:23:18.725 --rc genhtml_function_coverage=1 00:23:18.725 --rc genhtml_legend=1 00:23:18.725 --rc geninfo_all_blocks=1 00:23:18.725 --rc geninfo_unexecuted_blocks=1 00:23:18.725 00:23:18.725 ' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.725 --rc genhtml_branch_coverage=1 00:23:18.725 --rc genhtml_function_coverage=1 00:23:18.725 --rc genhtml_legend=1 00:23:18.725 --rc geninfo_all_blocks=1 00:23:18.725 --rc geninfo_unexecuted_blocks=1 00:23:18.725 00:23:18.725 ' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.725 --rc genhtml_branch_coverage=1 00:23:18.725 --rc genhtml_function_coverage=1 00:23:18.725 --rc genhtml_legend=1 00:23:18.725 --rc geninfo_all_blocks=1 00:23:18.725 --rc geninfo_unexecuted_blocks=1 00:23:18.725 00:23:18.725 ' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.725 --rc genhtml_branch_coverage=1 00:23:18.725 --rc genhtml_function_coverage=1 00:23:18.725 --rc genhtml_legend=1 00:23:18.725 --rc geninfo_all_blocks=1 00:23:18.725 --rc geninfo_unexecuted_blocks=1 00:23:18.725 00:23:18.725 ' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.725 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.985 17:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:24.337 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:24.337 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
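The two "Found net devices under ..." lines that follow come from a plain sysfs walk. As a standalone sketch, the mapping from a PCI function to its kernel interface names looks like this (PCI address and the E810 device id 0x8086:0x159b taken from this run; without nullglob the glob survives unexpanded when nothing is bound, hence the existence test):

    pci=0000:af:00.0                                   # first E810 port seen above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per bound interface
    if [[ -e ${pci_net_devs[0]} ]]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    fi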
00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:24.337 Found net devices under 0000:af:00.0: cvl_0_0 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:24.337 Found net devices under 0000:af:00.1: cvl_0_1 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.337 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:23:24.596 00:23:24.596 --- 10.0.0.2 ping statistics --- 00:23:24.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.596 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:23:24.596 00:23:24.596 --- 10.0.0.1 ping statistics --- 00:23:24.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.596 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.596 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2660658 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2660658 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2660658 ']' 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.855 17:33:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:24.855 [2024-12-09 17:33:53.857162] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
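Both pings succeeding is what lets nvmf_tcp_init return 0 above. Condensed from the trace, the topology recipe is: move the first E810 port into a private namespace as the target (10.0.0.2) and leave its sibling in the root namespace as the initiator (10.0.0.1). Interface names and addresses are the ones from this run; run as root:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator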
00:23:24.855 [2024-12-09 17:33:53.857208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.855 [2024-12-09 17:33:53.935688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.855 [2024-12-09 17:33:53.977263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.855 [2024-12-09 17:33:53.977299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.855 [2024-12-09 17:33:53.977306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.855 [2024-12-09 17:33:53.977312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.855 [2024-12-09 17:33:53.977317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.855 [2024-12-09 17:33:53.978690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.855 [2024-12-09 17:33:53.978804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.855 [2024-12-09 17:33:53.978909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.855 [2024-12-09 17:33:53.978910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 [2024-12-09 17:33:54.080304] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 Malloc0 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 [2024-12-09 17:33:54.189140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.114 [ 00:23:25.114 { 00:23:25.114 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:25.114 "subtype": "Discovery", 00:23:25.114 "listen_addresses": [ 00:23:25.114 { 00:23:25.114 "trtype": "TCP", 00:23:25.114 "adrfam": "IPv4", 00:23:25.114 "traddr": "10.0.0.2", 00:23:25.114 "trsvcid": "4420" 00:23:25.114 } 00:23:25.114 ], 00:23:25.114 "allow_any_host": true, 00:23:25.114 "hosts": [] 00:23:25.114 }, 00:23:25.114 { 00:23:25.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.114 "subtype": "NVMe", 00:23:25.114 "listen_addresses": [ 00:23:25.114 { 00:23:25.114 "trtype": "TCP", 00:23:25.114 "adrfam": "IPv4", 00:23:25.114 "traddr": "10.0.0.2", 00:23:25.114 "trsvcid": "4420" 00:23:25.114 } 00:23:25.114 ], 00:23:25.114 "allow_any_host": true, 00:23:25.114 "hosts": [], 00:23:25.114 "serial_number": "SPDK00000000000001", 00:23:25.114 "model_number": "SPDK bdev Controller", 00:23:25.114 "max_namespaces": 32, 00:23:25.114 "min_cntlid": 1, 00:23:25.114 "max_cntlid": 65519, 00:23:25.114 "namespaces": [ 00:23:25.114 { 00:23:25.114 "nsid": 1, 00:23:25.114 "bdev_name": "Malloc0", 00:23:25.114 "name": "Malloc0", 00:23:25.114 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:25.114 "eui64": "ABCDEF0123456789", 00:23:25.114 "uuid": "cb0e5e85-2a9b-4d6e-98e9-eb6bc300e08d" 00:23:25.114 } 00:23:25.114 ] 00:23:25.114 } 00:23:25.114 ] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.114 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:25.114 [2024-12-09 17:33:54.243132] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:23:25.114 [2024-12-09 17:33:54.243179] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660740 ] 00:23:25.114 [2024-12-09 17:33:54.284731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:25.114 [2024-12-09 17:33:54.284778] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:25.114 [2024-12-09 17:33:54.284783] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:25.114 [2024-12-09 17:33:54.284795] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:25.114 [2024-12-09 17:33:54.284803] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:25.114 [2024-12-09 17:33:54.288442] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:25.114 [2024-12-09 17:33:54.288474] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e7d690 0 00:23:25.375 [2024-12-09 17:33:54.296231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:25.375 [2024-12-09 17:33:54.296244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:25.375 [2024-12-09 17:33:54.296249] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:25.375 [2024-12-09 17:33:54.296252] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:25.375 [2024-12-09 17:33:54.296283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.375 [2024-12-09 17:33:54.296288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.296292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.296305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:25.376 [2024-12-09 17:33:54.296321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.303225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.303233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.303236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.303256] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:25.376 [2024-12-09 17:33:54.303262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:25.376 [2024-12-09 17:33:54.303267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:25.376 [2024-12-09 17:33:54.303280] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.303293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.303307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.303475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.303481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.303484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.303492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:25.376 [2024-12-09 17:33:54.303498] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:25.376 [2024-12-09 17:33:54.303504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.303516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.303526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.303591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.303596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.303599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.303607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:25.376 [2024-12-09 17:33:54.303614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:25.376 [2024-12-09 17:33:54.303620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.303632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.303641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 
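The controller whose bring-up is traced here was provisioned moments earlier through rpc_cmd, which in the autotest harness forwards to SPDK's scripts/rpc.py client. Written out directly (values exactly as they appear in the log; assumes an SPDK checkout and the default /var/tmp/spdk.sock RPC socket), the target-side setup was:

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB ramdisk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                               # returns the JSON dumped above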
00:23:25.376 [2024-12-09 17:33:54.303702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.303708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.303711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.303721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:25.376 [2024-12-09 17:33:54.303729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.303741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.303750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.303812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.303818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.303821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.303828] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:25.376 [2024-12-09 17:33:54.303832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:25.376 [2024-12-09 17:33:54.303839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:25.376 [2024-12-09 17:33:54.303948] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:25.376 [2024-12-09 17:33:54.303953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:25.376 [2024-12-09 17:33:54.303960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.303967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.303972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.303981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.304046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.304051] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.304054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.304061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:25.376 [2024-12-09 17:33:54.304069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.304081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.304090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.304167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.304174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.304177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.304184] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:25.376 [2024-12-09 17:33:54.304188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:25.376 [2024-12-09 17:33:54.304196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:25.376 [2024-12-09 17:33:54.304203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:25.376 [2024-12-09 17:33:54.304211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.376 [2024-12-09 17:33:54.304226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.376 [2024-12-09 17:33:54.304237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.376 [2024-12-09 17:33:54.304322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.376 [2024-12-09 17:33:54.304327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.376 [2024-12-09 17:33:54.304331] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304334] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7d690): datao=0, datal=4096, cccid=0 00:23:25.376 [2024-12-09 17:33:54.304338] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1edf100) on tqpair(0x1e7d690): expected_datao=0, payload_size=4096 00:23:25.376 [2024-12-09 17:33:54.304342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304354] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.376 [2024-12-09 17:33:54.304395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.376 [2024-12-09 17:33:54.304398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.376 [2024-12-09 17:33:54.304401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.376 [2024-12-09 17:33:54.304407] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:25.376 [2024-12-09 17:33:54.304414] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:25.376 [2024-12-09 17:33:54.304418] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:25.377 [2024-12-09 17:33:54.304423] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:25.377 [2024-12-09 17:33:54.304427] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:25.377 [2024-12-09 17:33:54.304431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:25.377 [2024-12-09 17:33:54.304438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:25.377 [2024-12-09 17:33:54.304444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.304459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:25.377 [2024-12-09 17:33:54.304469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.377 [2024-12-09 17:33:54.304532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.377 [2024-12-09 17:33:54.304537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.377 [2024-12-09 17:33:54.304540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.377 [2024-12-09 17:33:54.304550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7d690) 00:23:25.377 
[2024-12-09 17:33:54.304561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.377 [2024-12-09 17:33:54.304567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.304578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.377 [2024-12-09 17:33:54.304583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.304594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.377 [2024-12-09 17:33:54.304599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.304610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.377 [2024-12-09 17:33:54.304615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:25.377 [2024-12-09 17:33:54.304625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:25.377 [2024-12-09 17:33:54.304630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.304639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.377 [2024-12-09 17:33:54.304650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf100, cid 0, qid 0 00:23:25.377 [2024-12-09 17:33:54.304654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf280, cid 1, qid 0 00:23:25.377 [2024-12-09 17:33:54.304658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf400, cid 2, qid 0 00:23:25.377 [2024-12-09 17:33:54.304662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.377 [2024-12-09 17:33:54.304666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf700, cid 4, qid 0 00:23:25.377 [2024-12-09 17:33:54.304762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.377 [2024-12-09 17:33:54.304768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.377 [2024-12-09 17:33:54.304770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:25.377 [2024-12-09 17:33:54.304774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf700) on tqpair=0x1e7d690 00:23:25.377 [2024-12-09 17:33:54.304778] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:25.377 [2024-12-09 17:33:54.304783] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:25.377 [2024-12-09 17:33:54.304792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.304801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.377 [2024-12-09 17:33:54.304810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf700, cid 4, qid 0 00:23:25.377 [2024-12-09 17:33:54.304882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.377 [2024-12-09 17:33:54.304888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.377 [2024-12-09 17:33:54.304890] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7d690): datao=0, datal=4096, cccid=4 00:23:25.377 [2024-12-09 17:33:54.304897] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edf700) on tqpair(0x1e7d690): expected_datao=0, payload_size=4096 00:23:25.377 [2024-12-09 17:33:54.304901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304911] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.304914] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.377 [2024-12-09 17:33:54.346233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.377 [2024-12-09 17:33:54.346237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf700) on tqpair=0x1e7d690 00:23:25.377 [2024-12-09 17:33:54.346252] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:25.377 [2024-12-09 17:33:54.346274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.346285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.377 [2024-12-09 17:33:54.346292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.346303] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.377 [2024-12-09 17:33:54.346318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf700, cid 4, qid 0 00:23:25.377 [2024-12-09 17:33:54.346324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf880, cid 5, qid 0 00:23:25.377 [2024-12-09 17:33:54.346515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.377 [2024-12-09 17:33:54.346520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.377 [2024-12-09 17:33:54.346524] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346529] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7d690): datao=0, datal=1024, cccid=4 00:23:25.377 [2024-12-09 17:33:54.346533] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edf700) on tqpair(0x1e7d690): expected_datao=0, payload_size=1024 00:23:25.377 [2024-12-09 17:33:54.346537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346542] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346545] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.377 [2024-12-09 17:33:54.346555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.377 [2024-12-09 17:33:54.346558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.346561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf880) on tqpair=0x1e7d690 00:23:25.377 [2024-12-09 17:33:54.387354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.377 [2024-12-09 17:33:54.387367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.377 [2024-12-09 17:33:54.387370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf700) on tqpair=0x1e7d690 00:23:25.377 [2024-12-09 17:33:54.387385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7d690) 00:23:25.377 [2024-12-09 17:33:54.387396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.377 [2024-12-09 17:33:54.387412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf700, cid 4, qid 0 00:23:25.377 [2024-12-09 17:33:54.387487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.377 [2024-12-09 17:33:54.387493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.377 [2024-12-09 17:33:54.387496] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387500] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7d690): datao=0, datal=3072, cccid=4 00:23:25.377 [2024-12-09 17:33:54.387504] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edf700) on tqpair(0x1e7d690): expected_datao=0, payload_size=3072 00:23:25.377 [2024-12-09 17:33:54.387508] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387513] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387517] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.377 [2024-12-09 17:33:54.387535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.377 [2024-12-09 17:33:54.387538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf700) on tqpair=0x1e7d690 00:23:25.377 [2024-12-09 17:33:54.387549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.377 [2024-12-09 17:33:54.387552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7d690) 00:23:25.378 [2024-12-09 17:33:54.387558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.378 [2024-12-09 17:33:54.387571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf700, cid 4, qid 0 00:23:25.378 [2024-12-09 17:33:54.387640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.378 [2024-12-09 17:33:54.387645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.378 [2024-12-09 17:33:54.387648] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.378 [2024-12-09 17:33:54.387651] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7d690): datao=0, datal=8, cccid=4 00:23:25.378 [2024-12-09 17:33:54.387658] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edf700) on tqpair(0x1e7d690): expected_datao=0, payload_size=8 00:23:25.378 [2024-12-09 17:33:54.387662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.378 [2024-12-09 17:33:54.387667] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.378 [2024-12-09 17:33:54.387670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.378 [2024-12-09 17:33:54.428355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.378 [2024-12-09 17:33:54.428364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.378 [2024-12-09 17:33:54.428367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.378 [2024-12-09 17:33:54.428370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf700) on tqpair=0x1e7d690 00:23:25.378 ===================================================== 00:23:25.378 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:25.378 ===================================================== 00:23:25.378 Controller Capabilities/Features 00:23:25.378 ================================ 00:23:25.378 Vendor ID: 0000 00:23:25.378 Subsystem Vendor ID: 0000 00:23:25.378 Serial Number: .................... 00:23:25.378 Model Number: ........................................ 
00:23:25.378 Firmware Version: 25.01
00:23:25.378 Recommended Arb Burst: 0
00:23:25.378 IEEE OUI Identifier: 00 00 00
00:23:25.378 Multi-path I/O
00:23:25.378 May have multiple subsystem ports: No
00:23:25.378 May have multiple controllers: No
00:23:25.378 Associated with SR-IOV VF: No
00:23:25.378 Max Data Transfer Size: 131072
00:23:25.378 Max Number of Namespaces: 0
00:23:25.378 Max Number of I/O Queues: 1024
00:23:25.378 NVMe Specification Version (VS): 1.3
00:23:25.378 NVMe Specification Version (Identify): 1.3
00:23:25.378 Maximum Queue Entries: 128
00:23:25.378 Contiguous Queues Required: Yes
00:23:25.378 Arbitration Mechanisms Supported
00:23:25.378 Weighted Round Robin: Not Supported
00:23:25.378 Vendor Specific: Not Supported
00:23:25.378 Reset Timeout: 15000 ms
00:23:25.378 Doorbell Stride: 4 bytes
00:23:25.378 NVM Subsystem Reset: Not Supported
00:23:25.378 Command Sets Supported
00:23:25.378 NVM Command Set: Supported
00:23:25.378 Boot Partition: Not Supported
00:23:25.378 Memory Page Size Minimum: 4096 bytes
00:23:25.378 Memory Page Size Maximum: 4096 bytes
00:23:25.378 Persistent Memory Region: Not Supported
00:23:25.378 Optional Asynchronous Events Supported
00:23:25.378 Namespace Attribute Notices: Not Supported
00:23:25.378 Firmware Activation Notices: Not Supported
00:23:25.378 ANA Change Notices: Not Supported
00:23:25.378 PLE Aggregate Log Change Notices: Not Supported
00:23:25.378 LBA Status Info Alert Notices: Not Supported
00:23:25.378 EGE Aggregate Log Change Notices: Not Supported
00:23:25.378 Normal NVM Subsystem Shutdown event: Not Supported
00:23:25.378 Zone Descriptor Change Notices: Not Supported
00:23:25.378 Discovery Log Change Notices: Supported
00:23:25.378 Controller Attributes
00:23:25.378 128-bit Host Identifier: Not Supported
00:23:25.378 Non-Operational Permissive Mode: Not Supported
00:23:25.378 NVM Sets: Not Supported
00:23:25.378 Read Recovery Levels: Not Supported
00:23:25.378 Endurance Groups: Not Supported
00:23:25.378 Predictable Latency Mode: Not Supported
00:23:25.378 Traffic Based Keep ALive: Not Supported
00:23:25.378 Namespace Granularity: Not Supported
00:23:25.378 SQ Associations: Not Supported
00:23:25.378 UUID List: Not Supported
00:23:25.378 Multi-Domain Subsystem: Not Supported
00:23:25.378 Fixed Capacity Management: Not Supported
00:23:25.378 Variable Capacity Management: Not Supported
00:23:25.378 Delete Endurance Group: Not Supported
00:23:25.378 Delete NVM Set: Not Supported
00:23:25.378 Extended LBA Formats Supported: Not Supported
00:23:25.378 Flexible Data Placement Supported: Not Supported
00:23:25.378 
00:23:25.378 Controller Memory Buffer Support
00:23:25.378 ================================
00:23:25.378 Supported: No
00:23:25.378 
00:23:25.378 Persistent Memory Region Support
00:23:25.378 ================================
00:23:25.378 Supported: No
00:23:25.378 
00:23:25.378 Admin Command Set Attributes
00:23:25.378 ============================
00:23:25.378 Security Send/Receive: Not Supported
00:23:25.378 Format NVM: Not Supported
00:23:25.378 Firmware Activate/Download: Not Supported
00:23:25.378 Namespace Management: Not Supported
00:23:25.378 Device Self-Test: Not Supported
00:23:25.378 Directives: Not Supported
00:23:25.378 NVMe-MI: Not Supported
00:23:25.378 Virtualization Management: Not Supported
00:23:25.378 Doorbell Buffer Config: Not Supported
00:23:25.378 Get LBA Status Capability: Not Supported
00:23:25.378 Command & Feature Lockdown Capability: Not Supported
00:23:25.378 Abort Command Limit: 1
00:23:25.378 Async Event Request Limit: 4
00:23:25.378 Number of Firmware Slots: N/A
00:23:25.378 Firmware Slot 1 Read-Only: N/A
00:23:25.378 Firmware Activation Without Reset: N/A
00:23:25.378 Multiple Update Detection Support: N/A
00:23:25.378 Firmware Update Granularity: No Information Provided
00:23:25.378 Per-Namespace SMART Log: No
00:23:25.378 Asymmetric Namespace Access Log Page: Not Supported
00:23:25.378 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:25.378 Command Effects Log Page: Not Supported
00:23:25.378 Get Log Page Extended Data: Supported
00:23:25.378 Telemetry Log Pages: Not Supported
00:23:25.378 Persistent Event Log Pages: Not Supported
00:23:25.378 Supported Log Pages Log Page: May Support
00:23:25.378 Commands Supported & Effects Log Page: Not Supported
00:23:25.378 Feature Identifiers & Effects Log Page:May Support
00:23:25.378 NVMe-MI Commands & Effects Log Page: May Support
00:23:25.378 Data Area 4 for Telemetry Log: Not Supported
00:23:25.378 Error Log Page Entries Supported: 128
00:23:25.378 Keep Alive: Not Supported
00:23:25.378 
00:23:25.378 NVM Command Set Attributes
00:23:25.378 ==========================
00:23:25.378 Submission Queue Entry Size
00:23:25.378 Max: 1
00:23:25.378 Min: 1
00:23:25.378 Completion Queue Entry Size
00:23:25.378 Max: 1
00:23:25.378 Min: 1
00:23:25.378 Number of Namespaces: 0
00:23:25.378 Compare Command: Not Supported
00:23:25.378 Write Uncorrectable Command: Not Supported
00:23:25.378 Dataset Management Command: Not Supported
00:23:25.378 Write Zeroes Command: Not Supported
00:23:25.378 Set Features Save Field: Not Supported
00:23:25.378 Reservations: Not Supported
00:23:25.378 Timestamp: Not Supported
00:23:25.378 Copy: Not Supported
00:23:25.378 Volatile Write Cache: Not Present
00:23:25.378 Atomic Write Unit (Normal): 1
00:23:25.378 Atomic Write Unit (PFail): 1
00:23:25.378 Atomic Compare & Write Unit: 1
00:23:25.378 Fused Compare & Write: Supported
00:23:25.378 Scatter-Gather List
00:23:25.378 SGL Command Set: Supported
00:23:25.378 SGL Keyed: Supported
00:23:25.378 SGL Bit Bucket Descriptor: Not Supported
00:23:25.378 SGL Metadata Pointer: Not Supported
00:23:25.378 Oversized SGL: Not Supported
00:23:25.378 SGL Metadata Address: Not Supported
00:23:25.378 SGL Offset: Supported
00:23:25.378 Transport SGL Data Block: Not Supported
00:23:25.378 Replay Protected Memory Block: Not Supported
00:23:25.378 
00:23:25.378 Firmware Slot Information
00:23:25.378 =========================
00:23:25.378 Active slot: 0
00:23:25.378 
00:23:25.378 
00:23:25.378 Error Log
00:23:25.378 =========
00:23:25.378 
00:23:25.378 Active Namespaces
00:23:25.378 =================
00:23:25.378 Discovery Log Page
00:23:25.378 ==================
00:23:25.378 Generation Counter: 2
00:23:25.378 Number of Records: 2
00:23:25.378 Record Format: 0
00:23:25.378 
00:23:25.378 Discovery Log Entry 0
00:23:25.378 ----------------------
00:23:25.378 Transport Type: 3 (TCP)
00:23:25.378 Address Family: 1 (IPv4)
00:23:25.378 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:25.378 Entry Flags:
00:23:25.378 Duplicate Returned Information: 1
00:23:25.378 Explicit Persistent Connection Support for Discovery: 1
00:23:25.378 Transport Requirements:
00:23:25.378 Secure Channel: Not Required
00:23:25.378 Port ID: 0 (0x0000)
00:23:25.378 Controller ID: 65535 (0xffff)
00:23:25.378 Admin Max SQ Size: 128
00:23:25.378 Transport Service Identifier: 4420
00:23:25.378 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:25.378 Transport Address: 10.0.0.2
00:23:25.378 
Discovery Log Entry 1 00:23:25.378 ---------------------- 00:23:25.379 Transport Type: 3 (TCP) 00:23:25.379 Address Family: 1 (IPv4) 00:23:25.379 Subsystem Type: 2 (NVM Subsystem) 00:23:25.379 Entry Flags: 00:23:25.379 Duplicate Returned Information: 0 00:23:25.379 Explicit Persistent Connection Support for Discovery: 0 00:23:25.379 Transport Requirements: 00:23:25.379 Secure Channel: Not Required 00:23:25.379 Port ID: 0 (0x0000) 00:23:25.379 Controller ID: 65535 (0xffff) 00:23:25.379 Admin Max SQ Size: 128 00:23:25.379 Transport Service Identifier: 4420 00:23:25.379 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:25.379 Transport Address: 10.0.0.2 [2024-12-09 17:33:54.428453] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:25.379 [2024-12-09 17:33:54.428463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf100) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-12-09 17:33:54.428473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf280) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-12-09 17:33:54.428481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf400) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-12-09 17:33:54.428489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-12-09 17:33:54.428502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.428515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.428528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.428587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.428592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.428595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 
17:33:54.428616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.428628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.428701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.428707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.428710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428719] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:25.379 [2024-12-09 17:33:54.428723] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:25.379 [2024-12-09 17:33:54.428730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.428742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.428752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.428809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.428815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.428818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.428842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.428851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.428933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.428938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.428941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.428953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.428960] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.428965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.428974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.429043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.429049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.429051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.429055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.429062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.429066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.429069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.429074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.429083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.429147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.429154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.429157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.429160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.429168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.429172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.429175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.429180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.429190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.433224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.433231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.433234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.433237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.433246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.433250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.433253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7d690) 00:23:25.379 [2024-12-09 17:33:54.433259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.379 [2024-12-09 17:33:54.433270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edf580, cid 3, qid 0 00:23:25.379 [2024-12-09 17:33:54.433416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.379 [2024-12-09 17:33:54.433421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.379 [2024-12-09 17:33:54.433424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.379 [2024-12-09 17:33:54.433427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edf580) on tqpair=0x1e7d690 00:23:25.379 [2024-12-09 17:33:54.433434] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:23:25.379 00:23:25.379 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:25.379 [2024-12-09 17:33:54.469774] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:23:25.379 [2024-12-09 17:33:54.469810] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660746 ] 00:23:25.379 [2024-12-09 17:33:54.506922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:25.379 [2024-12-09 17:33:54.506965] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:25.379 [2024-12-09 17:33:54.506970] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:25.379 [2024-12-09 17:33:54.506983] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:25.379 [2024-12-09 17:33:54.506990] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:25.379 [2024-12-09 17:33:54.514356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:25.380 [2024-12-09 17:33:54.514387] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8e2690 0 00:23:25.380 [2024-12-09 17:33:54.514560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:25.380 [2024-12-09 17:33:54.514567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:25.380 [2024-12-09 17:33:54.514570] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:25.380 [2024-12-09 17:33:54.514573] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:25.380 [2024-12-09 17:33:54.514593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.514598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.514601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.514611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:25.380 [2024-12-09 17:33:54.514623] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.522229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.522237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.522240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.380 [2024-12-09 17:33:54.522252] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:25.380 [2024-12-09 17:33:54.522258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:25.380 [2024-12-09 17:33:54.522263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:25.380 [2024-12-09 17:33:54.522273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.522287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.380 [2024-12-09 17:33:54.522298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.522431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.522437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.522440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.380 [2024-12-09 17:33:54.522447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:25.380 [2024-12-09 17:33:54.522453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:25.380 [2024-12-09 17:33:54.522459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.522472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.380 [2024-12-09 17:33:54.522482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.522543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.522548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.522551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 
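The trace above comes from the host/identify.sh step launching the prebuilt spdk_nvme_identify app against nqn.2016-06.io.spdk:cnode1. For readers following the "setting state to ..." transitions, the sketch below shows roughly how such a tool drives them through SPDK's public API. It is a minimal illustration under stated assumptions (a standard SPDK build; the program name "identify_sketch" and the single printed field are illustrative), not the actual source of spdk_nvme_identify, and real code would need fuller error handling.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (hugepages, memory subsystem). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport ID string the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/*
	 * spdk_nvme_connect() synchronously walks the admin init sequence
	 * the log traces: FABRIC CONNECT, read VS/CAP, check/set CC.EN,
	 * wait for CSTS.RDY = 1, IDENTIFY, configure AER, keep alive.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* Print one field from the cached IDENTIFY CONTROLLER data. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Because spdk_nvme_connect() is synchronous, the controller has already reached the "setting state to ready (no timeout)" point seen in the log by the time it returns. The per-step *DEBUG* lines appear here only because the test uses a debug build and passes -L all to spdk_nvme_identify, which enables every debug log flag.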
00:23:25.380 [2024-12-09 17:33:54.522561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:25.380 [2024-12-09 17:33:54.522568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:25.380 [2024-12-09 17:33:54.522574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.522586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.380 [2024-12-09 17:33:54.522595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.522658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.522664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.522667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.380 [2024-12-09 17:33:54.522674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:25.380 [2024-12-09 17:33:54.522683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.522695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.380 [2024-12-09 17:33:54.522705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.522770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.522776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.522779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.380 [2024-12-09 17:33:54.522786] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:25.380 [2024-12-09 17:33:54.522791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:25.380 [2024-12-09 17:33:54.522797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:25.380 [2024-12-09 17:33:54.522904] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:25.380 [2024-12-09 17:33:54.522908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:25.380 [2024-12-09 17:33:54.522915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.522921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.522927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.380 [2024-12-09 17:33:54.522936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.523006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.523012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.523015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.523018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.380 [2024-12-09 17:33:54.523022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:25.380 [2024-12-09 17:33:54.523030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.523033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.523036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.380 [2024-12-09 17:33:54.523042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.380 [2024-12-09 17:33:54.523051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.380 [2024-12-09 17:33:54.523116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.380 [2024-12-09 17:33:54.523122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.380 [2024-12-09 17:33:54.523125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.380 [2024-12-09 17:33:54.523128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.380 [2024-12-09 17:33:54.523132] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:25.380 [2024-12-09 17:33:54.523136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:25.380 [2024-12-09 17:33:54.523142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:25.380 [2024-12-09 17:33:54.523151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:25.380 [2024-12-09 17:33:54.523158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.381 [2024-12-09 17:33:54.523162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.381 [2024-12-09 17:33:54.523167] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.381 [2024-12-09 17:33:54.523176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.381 [2024-12-09 17:33:54.523272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.381 [2024-12-09 17:33:54.523278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.381 [2024-12-09 17:33:54.523281] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.381 [2024-12-09 17:33:54.523284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=4096, cccid=0 00:23:25.381 [2024-12-09 17:33:54.523288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944100) on tqpair(0x8e2690): expected_datao=0, payload_size=4096 00:23:25.381 [2024-12-09 17:33:54.523292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.381 [2024-12-09 17:33:54.523304] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.381 [2024-12-09 17:33:54.523308] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.645 [2024-12-09 17:33:54.564370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.645 [2024-12-09 17:33:54.564373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.645 [2024-12-09 17:33:54.564383] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:25.645 [2024-12-09 17:33:54.564392] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:25.645 [2024-12-09 17:33:54.564397] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:25.645 [2024-12-09 17:33:54.564401] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:25.645 [2024-12-09 17:33:54.564404] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:25.645 [2024-12-09 17:33:54.564408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:25.645 [2024-12-09 17:33:54.564447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.645 [2024-12-09 17:33:54.564511] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.645 [2024-12-09 17:33:54.564517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.645 [2024-12-09 17:33:54.564520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690 00:23:25.645 [2024-12-09 17:33:54.564528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.645 [2024-12-09 17:33:54.564544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.645 [2024-12-09 17:33:54.564560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.645 [2024-12-09 17:33:54.564576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.645 [2024-12-09 17:33:54.564591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.645 [2024-12-09 17:33:54.564627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944100, cid 0, qid 0 00:23:25.645 [2024-12-09 17:33:54.564632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x944280, cid 1, qid 0 00:23:25.645 [2024-12-09 17:33:54.564636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944400, cid 2, qid 0 00:23:25.645 [2024-12-09 17:33:54.564639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.645 [2024-12-09 17:33:54.564643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.645 [2024-12-09 17:33:54.564740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.645 [2024-12-09 17:33:54.564746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.645 [2024-12-09 17:33:54.564749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.645 [2024-12-09 17:33:54.564756] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:25.645 [2024-12-09 17:33:54.564761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:25.645 [2024-12-09 17:33:54.564800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.645 [2024-12-09 17:33:54.564863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.645 [2024-12-09 17:33:54.564868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.645 [2024-12-09 17:33:54.564871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.645 [2024-12-09 17:33:54.564927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.564943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.564946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.564952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:25.645 [2024-12-09 17:33:54.564961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.645 [2024-12-09 17:33:54.565037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.645 [2024-12-09 17:33:54.565043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.645 [2024-12-09 17:33:54.565047] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=4096, cccid=4 00:23:25.645 [2024-12-09 17:33:54.565053] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944700) on tqpair(0x8e2690): expected_datao=0, payload_size=4096 00:23:25.645 [2024-12-09 17:33:54.565057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565063] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565066] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.645 [2024-12-09 17:33:54.565086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.645 [2024-12-09 17:33:54.565089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.645 [2024-12-09 17:33:54.565100] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:25.645 [2024-12-09 17:33:54.565113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.565121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:25.645 [2024-12-09 17:33:54.565127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.645 [2024-12-09 17:33:54.565136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.645 [2024-12-09 17:33:54.565145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.645 [2024-12-09 17:33:54.565234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.645 [2024-12-09 17:33:54.565240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.645 [2024-12-09 17:33:54.565243] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.645 [2024-12-09 17:33:54.565246] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=4096, cccid=4 00:23:25.645 [2024-12-09 17:33:54.565250] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944700) on tqpair(0x8e2690): expected_datao=0, payload_size=4096 00:23:25.645 [2024-12-09 17:33:54.565254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 
17:33:54.565262] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.565290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.565304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.565330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.565340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.646 [2024-12-09 17:33:54.565419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.646 [2024-12-09 17:33:54.565425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.646 [2024-12-09 17:33:54.565428] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565431] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=4096, cccid=4 00:23:25.646 [2024-12-09 17:33:54.565435] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944700) on tqpair(0x8e2690): expected_datao=0, payload_size=4096 00:23:25.646 [2024-12-09 17:33:54.565438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565444] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565447] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.565464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.565473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565494] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565509] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:25.646 [2024-12-09 17:33:54.565513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:25.646 [2024-12-09 17:33:54.565518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:25.646 [2024-12-09 17:33:54.565530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.565540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.565545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.565557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.646 [2024-12-09 17:33:54.565568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.646 [2024-12-09 17:33:54.565574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944880, cid 5, qid 0 00:23:25.646 [2024-12-09 17:33:54.565650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.565659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.565668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.565676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944880) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.565687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.565696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.565705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944880, cid 5, qid 0 00:23:25.646 [2024-12-09 17:33:54.565771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.565779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944880) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.565790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.565799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.565808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944880, cid 5, qid 0 00:23:25.646 [2024-12-09 17:33:54.565886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.565894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944880) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.565906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.565909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.565914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.565923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944880, cid 5, qid 0 00:23:25.646 [2024-12-09 17:33:54.565992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.646 [2024-12-09 17:33:54.565997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.646 [2024-12-09 17:33:54.566000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.566003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944880) on tqpair=0x8e2690 00:23:25.646 [2024-12-09 17:33:54.566017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.566021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.566027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.566034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.566037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.566043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.566049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.566052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.566057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.566064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.566067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8e2690) 00:23:25.646 [2024-12-09 17:33:54.566072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.646 [2024-12-09 17:33:54.566082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944880, cid 5, qid 0 00:23:25.646 [2024-12-09 17:33:54.566087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944700, cid 4, qid 0 00:23:25.646 [2024-12-09 17:33:54.566091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944a00, cid 6, qid 0 00:23:25.646 [2024-12-09 17:33:54.566095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944b80, cid 7, qid 0 00:23:25.646 [2024-12-09 17:33:54.570235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.646 [2024-12-09 17:33:54.570244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.646 [2024-12-09 17:33:54.570247] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.570250] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=8192, cccid=5 00:23:25.646 [2024-12-09 17:33:54.570254] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944880) on tqpair(0x8e2690): expected_datao=0, payload_size=8192 00:23:25.646 [2024-12-09 17:33:54.570258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.570263] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.570267] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.570271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.646 [2024-12-09 17:33:54.570276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.646 [2024-12-09 17:33:54.570279] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.646 [2024-12-09 17:33:54.570282] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=512, cccid=4 00:23:25.647 [2024-12-09 17:33:54.570285] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944700) on tqpair(0x8e2690): expected_datao=0, payload_size=512 00:23:25.647 [2024-12-09 17:33:54.570289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570294] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.647 [2024-12-09 
17:33:54.570307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.647 [2024-12-09 17:33:54.570310] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570313] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=512, cccid=6 00:23:25.647 [2024-12-09 17:33:54.570316] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944a00) on tqpair(0x8e2690): expected_datao=0, payload_size=512 00:23:25.647 [2024-12-09 17:33:54.570320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570328] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570331] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:25.647 [2024-12-09 17:33:54.570340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:25.647 [2024-12-09 17:33:54.570343] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570346] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e2690): datao=0, datal=4096, cccid=7 00:23:25.647 [2024-12-09 17:33:54.570350] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x944b80) on tqpair(0x8e2690): expected_datao=0, payload_size=4096 00:23:25.647 [2024-12-09 17:33:54.570353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570359] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570362] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.647 [2024-12-09 17:33:54.570371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.647 [2024-12-09 17:33:54.570374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944880) on tqpair=0x8e2690 00:23:25.647 [2024-12-09 17:33:54.570387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.647 [2024-12-09 17:33:54.570392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.647 [2024-12-09 17:33:54.570395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944700) on tqpair=0x8e2690 00:23:25.647 [2024-12-09 17:33:54.570406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.647 [2024-12-09 17:33:54.570411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.647 [2024-12-09 17:33:54.570414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944a00) on tqpair=0x8e2690 00:23:25.647 [2024-12-09 17:33:54.570423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.647 [2024-12-09 17:33:54.570428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.647 [2024-12-09 17:33:54.570431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.647 [2024-12-09 17:33:54.570434] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944b80) on tqpair=0x8e2690
00:23:25.647 =====================================================
00:23:25.647 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:25.647 =====================================================
00:23:25.647 Controller Capabilities/Features
00:23:25.647 ================================
00:23:25.647 Vendor ID: 8086
00:23:25.647 Subsystem Vendor ID: 8086
00:23:25.647 Serial Number: SPDK00000000000001
00:23:25.647 Model Number: SPDK bdev Controller
00:23:25.647 Firmware Version: 25.01
00:23:25.647 Recommended Arb Burst: 6
00:23:25.647 IEEE OUI Identifier: e4 d2 5c
00:23:25.647 Multi-path I/O
00:23:25.647 May have multiple subsystem ports: Yes
00:23:25.647 May have multiple controllers: Yes
00:23:25.647 Associated with SR-IOV VF: No
00:23:25.647 Max Data Transfer Size: 131072
00:23:25.647 Max Number of Namespaces: 32
00:23:25.647 Max Number of I/O Queues: 127
00:23:25.647 NVMe Specification Version (VS): 1.3
00:23:25.647 NVMe Specification Version (Identify): 1.3
00:23:25.647 Maximum Queue Entries: 128
00:23:25.647 Contiguous Queues Required: Yes
00:23:25.647 Arbitration Mechanisms Supported
00:23:25.647 Weighted Round Robin: Not Supported
00:23:25.647 Vendor Specific: Not Supported
00:23:25.647 Reset Timeout: 15000 ms
00:23:25.647 Doorbell Stride: 4 bytes
00:23:25.647 NVM Subsystem Reset: Not Supported
00:23:25.647 Command Sets Supported
00:23:25.647 NVM Command Set: Supported
00:23:25.647 Boot Partition: Not Supported
00:23:25.647 Memory Page Size Minimum: 4096 bytes
00:23:25.647 Memory Page Size Maximum: 4096 bytes
00:23:25.647 Persistent Memory Region: Not Supported
00:23:25.647 Optional Asynchronous Events Supported
00:23:25.647 Namespace Attribute Notices: Supported
00:23:25.647 Firmware Activation Notices: Not Supported
00:23:25.647 ANA Change Notices: Not Supported
00:23:25.647 PLE Aggregate Log Change Notices: Not Supported
00:23:25.647 LBA Status Info Alert Notices: Not Supported
00:23:25.647 EGE Aggregate Log Change Notices: Not Supported
00:23:25.647 Normal NVM Subsystem Shutdown event: Not Supported
00:23:25.647 Zone Descriptor Change Notices: Not Supported
00:23:25.647 Discovery Log Change Notices: Not Supported
00:23:25.647 Controller Attributes
00:23:25.647 128-bit Host Identifier: Supported
00:23:25.647 Non-Operational Permissive Mode: Not Supported
00:23:25.647 NVM Sets: Not Supported
00:23:25.647 Read Recovery Levels: Not Supported
00:23:25.647 Endurance Groups: Not Supported
00:23:25.647 Predictable Latency Mode: Not Supported
00:23:25.647 Traffic Based Keep Alive: Not Supported
00:23:25.647 Namespace Granularity: Not Supported
00:23:25.647 SQ Associations: Not Supported
00:23:25.647 UUID List: Not Supported
00:23:25.647 Multi-Domain Subsystem: Not Supported
00:23:25.647 Fixed Capacity Management: Not Supported
00:23:25.647 Variable Capacity Management: Not Supported
00:23:25.647 Delete Endurance Group: Not Supported
00:23:25.647 Delete NVM Set: Not Supported
00:23:25.647 Extended LBA Formats Supported: Not Supported
00:23:25.647 Flexible Data Placement Supported: Not Supported
00:23:25.647
00:23:25.647 Controller Memory Buffer Support
00:23:25.647 ================================
00:23:25.647 Supported: No
00:23:25.647
00:23:25.647 Persistent Memory Region Support
00:23:25.647 ================================
00:23:25.647 Supported: No
00:23:25.647
00:23:25.647 Admin Command Set Attributes
00:23:25.647 ============================
00:23:25.647 Security Send/Receive: Not Supported
00:23:25.647 Format NVM: Not Supported
00:23:25.647 Firmware Activate/Download: Not Supported
00:23:25.647 Namespace Management: Not Supported
00:23:25.647 Device Self-Test: Not Supported
00:23:25.647 Directives: Not Supported
00:23:25.647 NVMe-MI: Not Supported
00:23:25.647 Virtualization Management: Not Supported
00:23:25.647 Doorbell Buffer Config: Not Supported
00:23:25.647 Get LBA Status Capability: Not Supported
00:23:25.647 Command & Feature Lockdown Capability: Not Supported
00:23:25.647 Abort Command Limit: 4
00:23:25.647 Async Event Request Limit: 4
00:23:25.647 Number of Firmware Slots: N/A
00:23:25.647 Firmware Slot 1 Read-Only: N/A
00:23:25.647 Firmware Activation Without Reset: N/A
00:23:25.647 Multiple Update Detection Support: N/A
00:23:25.647 Firmware Update Granularity: No Information Provided
00:23:25.647 Per-Namespace SMART Log: No
00:23:25.647 Asymmetric Namespace Access Log Page: Not Supported
00:23:25.647 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:25.647 Command Effects Log Page: Supported
00:23:25.647 Get Log Page Extended Data: Supported
00:23:25.647 Telemetry Log Pages: Not Supported
00:23:25.647 Persistent Event Log Pages: Not Supported
00:23:25.647 Supported Log Pages Log Page: May Support
00:23:25.647 Commands Supported & Effects Log Page: Not Supported
00:23:25.647 Feature Identifiers & Effects Log Page: May Support
00:23:25.647 NVMe-MI Commands & Effects Log Page: May Support
00:23:25.647 Data Area 4 for Telemetry Log: Not Supported
00:23:25.647 Error Log Page Entries Supported: 128
00:23:25.647 Keep Alive: Supported
00:23:25.647 Keep Alive Granularity: 10000 ms
00:23:25.647
00:23:25.647 NVM Command Set Attributes
00:23:25.647 ==========================
00:23:25.647 Submission Queue Entry Size
00:23:25.647 Max: 64
00:23:25.647 Min: 64
00:23:25.647 Completion Queue Entry Size
00:23:25.647 Max: 16
00:23:25.647 Min: 16
00:23:25.647 Number of Namespaces: 32
00:23:25.647 Compare Command: Supported
00:23:25.647 Write Uncorrectable Command: Not Supported
00:23:25.647 Dataset Management Command: Supported
00:23:25.647 Write Zeroes Command: Supported
00:23:25.647 Set Features Save Field: Not Supported
00:23:25.647 Reservations: Supported
00:23:25.647 Timestamp: Not Supported
00:23:25.647 Copy: Supported
00:23:25.647 Volatile Write Cache: Present
00:23:25.647 Atomic Write Unit (Normal): 1
00:23:25.647 Atomic Write Unit (PFail): 1
00:23:25.647 Atomic Compare & Write Unit: 1
00:23:25.647 Fused Compare & Write: Supported
00:23:25.647 Scatter-Gather List
00:23:25.647 SGL Command Set: Supported
00:23:25.647 SGL Keyed: Supported
00:23:25.647 SGL Bit Bucket Descriptor: Not Supported
00:23:25.647 SGL Metadata Pointer: Not Supported
00:23:25.647 Oversized SGL: Not Supported
00:23:25.647 SGL Metadata Address: Not Supported
00:23:25.648 SGL Offset: Supported
00:23:25.648 Transport SGL Data Block: Not Supported
00:23:25.648 Replay Protected Memory Block: Not Supported
00:23:25.648
00:23:25.648 Firmware Slot Information
00:23:25.648 =========================
00:23:25.648 Active slot: 1
00:23:25.648 Slot 1 Firmware Revision: 25.01
00:23:25.648
00:23:25.648
00:23:25.648 Commands Supported and Effects
00:23:25.648 ==============================
00:23:25.648 Admin Commands
00:23:25.648 --------------
00:23:25.648 Get Log Page (02h): Supported
00:23:25.648 Identify (06h): Supported
00:23:25.648 Abort (08h): Supported
00:23:25.648 Set Features (09h): Supported
00:23:25.648 Get Features (0Ah): Supported
00:23:25.648 Asynchronous Event Request (0Ch): Supported
00:23:25.648 Keep Alive (18h): Supported
00:23:25.648 I/O Commands
00:23:25.648 ------------
00:23:25.648 Flush (00h): Supported LBA-Change
00:23:25.648 Write (01h): Supported LBA-Change
00:23:25.648 Read (02h): Supported
00:23:25.648 Compare (05h): Supported
00:23:25.648 Write Zeroes (08h): Supported LBA-Change
00:23:25.648 Dataset Management (09h): Supported LBA-Change
00:23:25.648 Copy (19h): Supported LBA-Change
00:23:25.648
00:23:25.648 Error Log
00:23:25.648 =========
00:23:25.648
00:23:25.648 Arbitration
00:23:25.648 ===========
00:23:25.648 Arbitration Burst: 1
00:23:25.648
00:23:25.648 Power Management
00:23:25.648 ================
00:23:25.648 Number of Power States: 1
00:23:25.648 Current Power State: Power State #0
00:23:25.648 Power State #0:
00:23:25.648 Max Power: 0.00 W
00:23:25.648 Non-Operational State: Operational
00:23:25.648 Entry Latency: Not Reported
00:23:25.648 Exit Latency: Not Reported
00:23:25.648 Relative Read Throughput: 0
00:23:25.648 Relative Read Latency: 0
00:23:25.648 Relative Write Throughput: 0
00:23:25.648 Relative Write Latency: 0
00:23:25.648 Idle Power: Not Reported
00:23:25.648 Active Power: Not Reported
00:23:25.648 Non-Operational Permissive Mode: Not Supported
00:23:25.648
00:23:25.648 Health Information
00:23:25.648 ==================
00:23:25.648 Critical Warnings:
00:23:25.648 Available Spare Space: OK
00:23:25.648 Temperature: OK
00:23:25.648 Device Reliability: OK
00:23:25.648 Read Only: No
00:23:25.648 Volatile Memory Backup: OK
00:23:25.648 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:25.648 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:25.648 Available Spare: 0%
00:23:25.648 Available Spare Threshold: 0%
00:23:25.648 Life Percentage Used:[2024-12-09 17:33:54.570513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
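The identify readout above is everything the host learned from the target during the init states traced earlier (identify controller, set number of queues, identify active ns, identify ns, namespace id descriptors). Below is a minimal host-side sketch of the same connect-and-identify flow against this target, written with SPDK's public NVMe driver API; the transport string, keep-alive value, and printed fields are taken from the log above, while the program itself is illustrative and not part of this test run.

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* The target this test exercised: NVMe/TCP at 10.0.0.2:4420, cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* 10 s keep-alive timeout; the trace shows keep-alives being sent
	 * every 5000000 us, i.e. at half the timeout. */
	opts.keep_alive_timeout_ms = 10000;

	/* Runs the connect/identify state machine traced in the log. */
	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data was cached during initialization, so this
	 * is a pointer read, not a new admin command. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Vendor ID: %04x\n", cdata->vid);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() is what drives the "setting state to ..." transitions seen throughout this trace; by the time it returns, the admin queue pair has already carried the IDENTIFY, SET FEATURES NUMBER OF QUEUES, and GET FEATURES/GET LOG PAGE commands printed above.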
00:23:25.648 [2024-12-09 17:33:54.570518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8e2690)
00:23:25.648 [2024-12-09 17:33:54.570524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:25.648 [2024-12-09 17:33:54.570535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944b80, cid 7, qid 0
00:23:25.648 [2024-12-09 17:33:54.570683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:25.648 [2024-12-09 17:33:54.570689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:25.648 [2024-12-09 17:33:54.570693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:25.648 [2024-12-09 17:33:54.570696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944b80) on tqpair=0x8e2690
00:23:25.648 [2024-12-09 17:33:54.570734] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:23:25.648 [2024-12-09 17:33:54.570744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944100) on tqpair=0x8e2690
00:23:25.648 [2024-12-09 17:33:54.570750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.648 [2024-12-09 17:33:54.570755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944280) on tqpair=0x8e2690
00:23:25.648 [2024-12-09 17:33:54.570760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.648 [2024-12-09
17:33:54.570765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944400) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.570769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.648 [2024-12-09 17:33:54.570773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.570777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.648 [2024-12-09 17:33:54.570783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.570787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.570790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.648 [2024-12-09 17:33:54.570796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.648 [2024-12-09 17:33:54.570807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.648 [2024-12-09 17:33:54.570874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.648 [2024-12-09 17:33:54.570879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.648 [2024-12-09 17:33:54.570882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.570885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.570891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.570894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.570898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.648 [2024-12-09 17:33:54.570906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.648 [2024-12-09 17:33:54.570920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.648 [2024-12-09 17:33:54.570997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.648 [2024-12-09 17:33:54.571003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.648 [2024-12-09 17:33:54.571006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.571012] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:25.648 [2024-12-09 17:33:54.571016] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:25.648 [2024-12-09 17:33:54.571024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.648 [2024-12-09 17:33:54.571037] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.648 [2024-12-09 17:33:54.571047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.648 [2024-12-09 17:33:54.571107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.648 [2024-12-09 17:33:54.571113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.648 [2024-12-09 17:33:54.571116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.571128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.648 [2024-12-09 17:33:54.571142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.648 [2024-12-09 17:33:54.571151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.648 [2024-12-09 17:33:54.571215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.648 [2024-12-09 17:33:54.571230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.648 [2024-12-09 17:33:54.571233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.571244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.648 [2024-12-09 17:33:54.571256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.648 [2024-12-09 17:33:54.571266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.648 [2024-12-09 17:33:54.571333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.648 [2024-12-09 17:33:54.571339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.648 [2024-12-09 17:33:54.571342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.571353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.648 [2024-12-09 17:33:54.571365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.648 [2024-12-09 17:33:54.571375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x944580, cid 3, qid 0 00:23:25.648 [2024-12-09 17:33:54.571449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.648 [2024-12-09 17:33:54.571455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.648 [2024-12-09 17:33:54.571458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.648 [2024-12-09 17:33:54.571469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.648 [2024-12-09 17:33:54.571472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.571481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.571490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.571556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.571563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.571567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.571582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.571603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.571614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.571683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.571689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.571692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.571704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.571716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.571726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.571800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.571806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.571809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.571820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.571833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.571843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.571917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.571924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.571928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.571942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.571950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.571956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.571966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.572039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.572051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.572068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.572077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.572166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572170] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.572178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.572194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.572204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.572277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.572289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.572305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.572316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.572396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.572411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.572426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.572435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.572510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.572527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572531] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.572540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.572551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.649 [2024-12-09 17:33:54.572630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.649 [2024-12-09 17:33:54.572642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.649 [2024-12-09 17:33:54.572652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.649 [2024-12-09 17:33:54.572658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.649 [2024-12-09 17:33:54.572670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.649 [2024-12-09 17:33:54.572737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.649 [2024-12-09 17:33:54.572744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.650 [2024-12-09 17:33:54.572748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.650 [2024-12-09 17:33:54.572760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.650 [2024-12-09 17:33:54.572772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.650 [2024-12-09 17:33:54.572781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.650 [2024-12-09 17:33:54.572859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.650 [2024-12-09 17:33:54.572865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.650 [2024-12-09 17:33:54.572867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.650 [2024-12-09 17:33:54.572879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.650 [2024-12-09 
17:33:54.572893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.650 [2024-12-09 17:33:54.572904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.650 [2024-12-09 17:33:54.572966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.650 [2024-12-09 17:33:54.572971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.650 [2024-12-09 17:33:54.572974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.650 [2024-12-09 17:33:54.572986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.572993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.650 [2024-12-09 17:33:54.572998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.650 [2024-12-09 17:33:54.573007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.650 [2024-12-09 17:33:54.573067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.650 [2024-12-09 17:33:54.573073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.650 [2024-12-09 17:33:54.573076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.650 [2024-12-09 17:33:54.573087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.650 [2024-12-09 17:33:54.573099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.650 [2024-12-09 17:33:54.573108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.650 [2024-12-09 17:33:54.573167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.650 [2024-12-09 17:33:54.573173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.650 [2024-12-09 17:33:54.573176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.650 [2024-12-09 17:33:54.573187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.650 [2024-12-09 17:33:54.573199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.650 [2024-12-09 17:33:54.573207] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x944580, cid 3, qid 0 00:23:25.650 [2024-12-09 17:33:54.573287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.650 [2024-12-09 17:33:54.573293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.650 [2024-12-09 17:33:54.573296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.650 [2024-12-09 17:33:54.573307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:25.650 [2024-12-09 17:33:54.573313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e2690) 00:23:25.650 [2024-12-09 17:33:54.573319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.650
[... this nine-message receive/complete/resend cycle -- the host repeatedly issuing FABRIC PROPERTY GET on the admin queue while polling the controller for shutdown completion -- repeats near-verbatim for the remaining poll iterations, timestamps 17:33:54.573328 through 17:33:54.581272; only the timestamps change between iterations, so the duplicates are elided here ...]
00:23:25.653 [2024-12-09 17:33:54.581398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:25.653 [2024-12-09 17:33:54.581403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:25.653 [2024-12-09 17:33:54.581406]
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:25.653 [2024-12-09 17:33:54.581409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x944580) on tqpair=0x8e2690 00:23:25.653 [2024-12-09 17:33:54.581416] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 10 milliseconds 00:23:25.653 0% 00:23:25.653 Data Units Read: 0 00:23:25.653 Data Units Written: 0 00:23:25.653 Host Read Commands: 0 00:23:25.653 Host Write Commands: 0 00:23:25.653 Controller Busy Time: 0 minutes 00:23:25.653 Power Cycles: 0 00:23:25.653 Power On Hours: 0 hours 00:23:25.653 Unsafe Shutdowns: 0 00:23:25.653 Unrecoverable Media Errors: 0 00:23:25.653 Lifetime Error Log Entries: 0 00:23:25.653 Warning Temperature Time: 0 minutes 00:23:25.653 Critical Temperature Time: 0 minutes 00:23:25.653 00:23:25.653 Number of Queues 00:23:25.653 ================ 00:23:25.653 Number of I/O Submission Queues: 127 00:23:25.653 Number of I/O Completion Queues: 127 00:23:25.653 00:23:25.653 Active Namespaces 00:23:25.653 ================= 00:23:25.653 Namespace ID:1 00:23:25.653 Error Recovery Timeout: Unlimited 00:23:25.653 Command Set Identifier: NVM (00h) 00:23:25.653 Deallocate: Supported 00:23:25.653 Deallocated/Unwritten Error: Not Supported 00:23:25.653 Deallocated Read Value: Unknown 00:23:25.653 Deallocate in Write Zeroes: Not Supported 00:23:25.653 Deallocated Guard Field: 0xFFFF 00:23:25.653 Flush: Supported 00:23:25.653 Reservation: Supported 00:23:25.653 Namespace Sharing Capabilities: Multiple Controllers 00:23:25.653 Size (in LBAs): 131072 (0GiB) 00:23:25.653 Capacity (in LBAs): 131072 (0GiB) 00:23:25.653 Utilization (in LBAs): 131072 (0GiB) 00:23:25.653 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:25.653 EUI64: ABCDEF0123456789 00:23:25.653 UUID: cb0e5e85-2a9b-4d6e-98e9-eb6bc300e08d 00:23:25.653 Thin Provisioning: Not Supported 00:23:25.653 Per-NS Atomic Units: Yes 00:23:25.653 Atomic Boundary Size (Normal): 0 00:23:25.653 Atomic Boundary Size (PFail): 0 00:23:25.653 Atomic Boundary Offset: 0 00:23:25.653 Maximum Single Source Range Length: 65535 00:23:25.653 Maximum Copy Length: 65535 00:23:25.653 Maximum Source Range Count: 1 00:23:25.653 NGUID/EUI64 Never Reused: No 00:23:25.653 Namespace Write Protected: No 00:23:25.653 Number of LBA Formats: 1 00:23:25.653 Current LBA Format: LBA Format #00 00:23:25.653 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:25.653 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.653 17:33:54 
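
The trace above is identify.sh's teardown: after the controller and namespace data is printed, the test deletes the subsystem it created over SPDK's JSON-RPC socket, then hands off to nvmftestfini for the remaining cleanup. A minimal manual equivalent, assuming a running nvmf_tgt listening on the default RPC socket (the rpc.py path is the one this job uses throughout):

    # remove the test subsystem; its namespace and listener go with it
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # sanity check: list whatever subsystems remain on the target
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
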
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.653 rmmod nvme_tcp 00:23:25.653 rmmod nvme_fabrics 00:23:25.653 rmmod nvme_keyring 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2660658 ']' 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2660658 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2660658 ']' 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2660658 00:23:25.653 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660658 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660658' 00:23:25.654 killing process with pid 2660658 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2660658 00:23:25.654 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2660658 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.913 17:33:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.819 17:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.819 00:23:27.819 real 0m9.293s 00:23:27.819 user 0m5.431s 00:23:27.819 sys 0m4.787s 00:23:27.819 17:33:56 
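
The cleanup above runs in two halves: nvmfcleanup retries modprobe -v -r on the initiator side (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), and killprocess stops the nvmf_tgt reactor, pid 2660658. A sketch of the killprocess pattern visible in the trace -- not SPDK's exact helper, just the same guards in order:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0     # process already gone, nothing to do
        # on Linux, check the command name so a sudo wrapper is never signalled directly
        if [ "$(uname)" = Linux ] &&
           [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # wait only reaps children of this shell
    }
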
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.819 17:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.819 ************************************ 00:23:27.819 END TEST nvmf_identify 00:23:27.819 ************************************ 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.080 ************************************ 00:23:28.080 START TEST nvmf_perf 00:23:28.080 ************************************ 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:28.080 * Looking for test storage... 00:23:28.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.080 --rc genhtml_branch_coverage=1 00:23:28.080 --rc genhtml_function_coverage=1 00:23:28.080 --rc genhtml_legend=1 00:23:28.080 --rc geninfo_all_blocks=1 00:23:28.080 --rc geninfo_unexecuted_blocks=1 00:23:28.080 00:23:28.080 ' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.080 --rc genhtml_branch_coverage=1 00:23:28.080 --rc genhtml_function_coverage=1 00:23:28.080 --rc genhtml_legend=1 00:23:28.080 --rc geninfo_all_blocks=1 00:23:28.080 --rc geninfo_unexecuted_blocks=1 00:23:28.080 00:23:28.080 ' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.080 --rc genhtml_branch_coverage=1 00:23:28.080 --rc genhtml_function_coverage=1 00:23:28.080 --rc genhtml_legend=1 00:23:28.080 --rc geninfo_all_blocks=1 00:23:28.080 --rc geninfo_unexecuted_blocks=1 00:23:28.080 00:23:28.080 ' 00:23:28.080 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.080 --rc genhtml_branch_coverage=1 00:23:28.080 --rc genhtml_function_coverage=1 00:23:28.080 --rc genhtml_legend=1 00:23:28.080 --rc geninfo_all_blocks=1 00:23:28.080 --rc geninfo_unexecuted_blocks=1 00:23:28.080 00:23:28.080 ' 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.081 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.341 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.341 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.342 17:33:57 
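
The "line 33: [: : integer expression expected" message above is a real, if harmless, shell error in the sourced nvmf/common.sh: the traced test is '[' '' -eq 1 ']', which asks test to compare an empty string numerically. The script survives because the failing comparison simply evaluates false, but the robust idiom defaults the value before the numeric test. A sketch, where SOME_FLAG and enable_feature are placeholders -- the trace does not name the variable that was empty at line 33:

    # broken when SOME_FLAG is unset or empty:
    #   [ "$SOME_FLAG" -eq 1 ] && enable_feature
    # safe: substitute 0 for an empty value before comparing
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        enable_feature
    fi
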
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.342 17:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:34.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:34.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:34.926 Found net devices under 0000:af:00.0: cvl_0_0 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.926 17:34:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:34.926 Found net devices under 0000:af:00.1: cvl_0_1 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.926 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.927 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.927 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.927 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.927 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.927 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.927 17:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.927 17:34:03 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:34.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:34.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms
00:23:34.927
00:23:34.927 --- 10.0.0.2 ping statistics ---
00:23:34.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:34.927 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:34.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:34.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms
00:23:34.927
00:23:34.927 --- 10.0.0.1 ping statistics ---
00:23:34.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:34.927 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2664236
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2664236
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2664236 ']'
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:23:34.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:34.927 [2024-12-09 17:34:03.376067] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:23:34.927 [2024-12-09 17:34:03.376115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.927 [2024-12-09 17:34:03.454714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.927 [2024-12-09 17:34:03.494067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.927 [2024-12-09 17:34:03.494105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.927 [2024-12-09 17:34:03.494112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.927 [2024-12-09 17:34:03.494119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.927 [2024-12-09 17:34:03.494123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.927 [2024-12-09 17:34:03.495678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.927 [2024-12-09 17:34:03.495788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.927 [2024-12-09 17:34:03.495893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.927 [2024-12-09 17:34:03.495894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:34.927 17:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:38.213 17:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:38.213 17:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:38.213 17:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:38.213 17:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.213 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
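The nvmf_tcp_init and nvmfappstart steps traced above isolate one port of the E810 NIC in a network namespace and start the SPDK target inside it, so a single host can act as both NVMe/TCP target and initiator over real hardware. A minimal sketch of the same bring-up, using the interface names (cvl_0_0, cvl_0_1), addresses, and binary path this run reports; every command below appears in the trace, only the xtrace prefixes are stripped:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator-side port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                             # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Keeping target and initiator on the two physical ports of the same NIC, separated only by the namespace, forces the traffic onto the wire while one machine plays both roles.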
00:23:38.213 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:23:38.213 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:23:38.213 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:23:38.213 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:38.213 [2024-12-09 17:34:07.284981] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:38.471 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:38.471 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:38.471 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:38.729 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:38.729 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:23:38.988 17:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:38.988 [2024-12-09 17:34:08.078464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:38.988 17:34:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:39.246 17:34:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:23:39.246 17:34:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:23:39.246 17:34:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:39.246 17:34:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:23:40.621 Initializing NVMe Controllers
00:23:40.621 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:23:40.621 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:23:40.621 Initialization complete. Launching workers.
00:23:40.621 ========================================================
00:23:40.621 Latency(us)
00:23:40.621 Device Information : IOPS MiB/s Average min max
00:23:40.621 PCIE (0000:5e:00.0) NSID 1 from core 0: 98313.82 384.04 325.07 23.17 7184.60
00:23:40.621 ========================================================
00:23:40.621 Total : 98313.82 384.04 325.07 23.17 7184.60
00:23:40.621
00:23:40.621 17:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:41.997 Initializing NVMe Controllers
00:23:41.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:41.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:41.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:41.997 Initialization complete. Launching workers.
00:23:41.997 ========================================================
00:23:41.997 Latency(us)
00:23:41.997 Device Information : IOPS MiB/s Average min max
00:23:41.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 133.00 0.52 7743.53 107.33 44719.96
00:23:41.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 74.00 0.29 13887.65 6980.53 47888.06
00:23:41.997 ========================================================
00:23:41.997 Total : 207.00 0.81 9939.98 107.33 47888.06
00:23:41.997
00:23:41.997 17:34:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:43.374 Initializing NVMe Controllers
00:23:43.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:43.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:43.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:43.374 Initialization complete. Launching workers.
00:23:43.374 ========================================================
00:23:43.374 Latency(us)
00:23:43.374 Device Information : IOPS MiB/s Average min max
00:23:43.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11229.37 43.86 2848.45 468.49 9975.34
00:23:43.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3792.41 14.81 8463.93 7160.14 19163.94
00:23:43.374 ========================================================
00:23:43.374 Total : 15021.77 58.68 4266.14 468.49 19163.94
00:23:43.374
00:23:43.374 17:34:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:43.375 17:34:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:43.375 17:34:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:45.907 Initializing NVMe Controllers
00:23:45.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:45.907 Controller IO queue size 128, less than required.
00:23:45.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
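For orientation between the runs: the fabric endpoints measured above were provisioned over JSON-RPC earlier in the trace (host/perf.sh@42 through @49). The same sequence, condensed into a sketch, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used by the job:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # RAM-backed malloc bdev (bdev_malloc_create 64 512 above)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # bdev on the local NVMe drive at 0000:5e:00.0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Assuming namespace IDs follow the add order, NSID 1 in the latency tables is the malloc bdev and NSID 2 the NVMe-backed one, which is why each fabric run reports a pair of rows.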
00:23:45.907 Controller IO queue size 128, less than required.
00:23:45.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:45.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:45.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:45.907 Initialization complete. Launching workers.
00:23:45.907 ========================================================
00:23:45.907 Latency(us)
00:23:45.907 Device Information : IOPS MiB/s Average min max
00:23:45.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1781.82 445.46 72901.54 49779.07 124428.03
00:23:45.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.77 151.94 220506.90 66645.26 333969.08
00:23:45.907 ========================================================
00:23:45.907 Total : 2389.59 597.40 110443.51 49779.07 333969.08
00:23:45.907
00:23:45.907 17:34:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:45.907 No valid NVMe controllers or AIO or URING devices found
00:23:45.907 Initializing NVMe Controllers
00:23:45.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:45.907 Controller IO queue size 128, less than required.
00:23:45.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:45.907 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:45.907 Controller IO queue size 128, less than required.
00:23:45.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:45.907 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:45.907 WARNING: Some requested NVMe devices were skipped
00:23:45.907 17:34:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:48.440 Initializing NVMe Controllers
00:23:48.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:48.440 Controller IO queue size 128, less than required.
00:23:48.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:48.440 Controller IO queue size 128, less than required.
00:23:48.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:48.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:48.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:48.440 Initialization complete. Launching workers.
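The -q 128 -o 36964 run above exercises the unaligned-IO path: 36964 is not a multiple of the 512-byte sector size (72 * 512 = 36864, remainder 100), so spdk_nvme_perf removes both namespaces from the test and then reports that no valid devices remain, exactly as the warnings show. A minimal illustrative pre-check of the same condition (not part of the test suite; values copied from this run):

  # Hypothetical helper: flag IO sizes that spdk_nvme_perf would reject.
  io_size=36964
  sector_size=512
  if (( io_size % sector_size != 0 )); then
    echo "IO size ${io_size} is not a multiple of sector size ${sector_size}; namespace would be skipped"
  fi

The --transport-stat run launched just above continues below with per-lcore TCP polling counters before the usual latency summary.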
00:23:48.440
00:23:48.440 ====================
00:23:48.440 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:48.440 TCP transport:
00:23:48.440 polls: 15386
00:23:48.440 idle_polls: 11931
00:23:48.440 sock_completions: 3455
00:23:48.440 nvme_completions: 6343
00:23:48.440 submitted_requests: 9470
00:23:48.440 queued_requests: 1
00:23:48.440
00:23:48.440 ====================
00:23:48.440 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:48.440 TCP transport:
00:23:48.440 polls: 15633
00:23:48.440 idle_polls: 11744
00:23:48.440 sock_completions: 3889
00:23:48.440 nvme_completions: 6687
00:23:48.440 submitted_requests: 9958
00:23:48.440 queued_requests: 1
00:23:48.440 ========================================================
00:23:48.440 Latency(us)
00:23:48.440 Device Information : IOPS MiB/s Average min max
00:23:48.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1585.30 396.32 82443.65 54790.29 135161.81
00:23:48.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1671.29 417.82 77452.50 44180.05 134420.74
00:23:48.440 ========================================================
00:23:48.440 Total : 3256.58 814.15 79882.18 44180.05 135161.81
00:23:48.440
00:23:48.440 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:48.699 rmmod nvme_tcp
00:23:48.699 rmmod nvme_fabrics
00:23:48.699 rmmod nvme_keyring
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2664236 ']'
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2664236
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2664236 ']'
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2664236
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664236
00:23:48.699 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:48.958 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:48.958 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664236'
00:23:48.958 killing process with pid 2664236
00:23:48.958 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2664236
00:23:48.958 17:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2664236
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:50.336 17:34:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:52.875
00:23:52.875 real 0m24.392s
00:23:52.875 user 1m3.044s
00:23:52.875 sys 0m8.343s
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:52.875 ************************************
00:23:52.875 END TEST nvmf_perf
00:23:52.875 ************************************
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:52.875 ************************************
00:23:52.875 START TEST nvmf_fio_host
00:23:52.875 ************************************
00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:52.875 * Looking for test storage...
00:23:52.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:52.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.875 --rc genhtml_branch_coverage=1 00:23:52.875 --rc genhtml_function_coverage=1 00:23:52.875 --rc genhtml_legend=1 00:23:52.875 --rc geninfo_all_blocks=1 00:23:52.875 --rc geninfo_unexecuted_blocks=1 00:23:52.875 00:23:52.875 ' 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:52.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.875 --rc genhtml_branch_coverage=1 00:23:52.875 --rc genhtml_function_coverage=1 00:23:52.875 --rc genhtml_legend=1 00:23:52.875 --rc geninfo_all_blocks=1 00:23:52.875 --rc geninfo_unexecuted_blocks=1 00:23:52.875 00:23:52.875 ' 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:52.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.875 --rc genhtml_branch_coverage=1 00:23:52.875 --rc genhtml_function_coverage=1 00:23:52.875 --rc genhtml_legend=1 00:23:52.875 --rc geninfo_all_blocks=1 00:23:52.875 --rc geninfo_unexecuted_blocks=1 00:23:52.875 00:23:52.875 ' 00:23:52.875 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:52.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.876 --rc genhtml_branch_coverage=1 00:23:52.876 --rc genhtml_function_coverage=1 00:23:52.876 --rc genhtml_legend=1 00:23:52.876 --rc geninfo_all_blocks=1 00:23:52.876 --rc geninfo_unexecuted_blocks=1 00:23:52.876 00:23:52.876 ' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.876 17:34:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:52.876 
17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.876 17:34:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.449 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.449 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.450 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.450 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.450 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:23:59.450 00:23:59.450 --- 10.0.0.2 ping statistics --- 00:23:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.450 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:59.450 00:23:59.450 --- 10.0.0.1 ping statistics --- 00:23:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.450 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2670296 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2670296 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2670296 ']' 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.450 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.450 [2024-12-09 17:34:27.751230] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:23:59.450 [2024-12-09 17:34:27.751281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.450 [2024-12-09 17:34:27.831936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.450 [2024-12-09 17:34:27.875614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.450 [2024-12-09 17:34:27.875648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.450 [2024-12-09 17:34:27.875655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.450 [2024-12-09 17:34:27.875665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.450 [2024-12-09 17:34:27.875670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.450 [2024-12-09 17:34:27.877257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.451 [2024-12-09 17:34:27.877364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.451 [2024-12-09 17:34:27.877471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.451 [2024-12-09 17:34:27.877472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.451 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.451 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:59.451 17:34:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:59.451 [2024-12-09 17:34:28.151542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.451 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:59.451 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.451 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.451 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:59.451 Malloc1 00:23:59.451 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.710 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:59.710 17:34:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.969 [2024-12-09 17:34:28.985264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.969 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:00.229 17:34:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:00.489 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:00.489 fio-3.35 00:24:00.489 Starting 1 thread 00:24:03.157 00:24:03.157 test: (groupid=0, jobs=1): 
err= 0: pid=2670858: Mon Dec 9 17:34:31 2024 00:24:03.157 read: IOPS=12.0k, BW=46.8MiB/s (49.0MB/s)(93.8MiB/2005msec) 00:24:03.157 slat (nsec): min=1522, max=251333, avg=1692.24, stdev=2230.97 00:24:03.157 clat (usec): min=3171, max=9909, avg=5904.83, stdev=468.42 00:24:03.157 lat (usec): min=3203, max=9911, avg=5906.52, stdev=468.34 00:24:03.157 clat percentiles (usec): 00:24:03.157 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:24:03.157 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:24:03.157 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:24:03.157 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8848], 99.95th=[ 9634], 00:24:03.157 | 99.99th=[ 9896] 00:24:03.157 bw ( KiB/s): min=47072, max=48416, per=99.98%, avg=47882.00, stdev=588.36, samples=4 00:24:03.157 iops : min=11768, max=12104, avg=11970.50, stdev=147.09, samples=4 00:24:03.157 write: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(93.4MiB/2005msec); 0 zone resets 00:24:03.157 slat (nsec): min=1559, max=233092, avg=1757.08, stdev=1696.11 00:24:03.157 clat (usec): min=2428, max=9306, avg=4774.53, stdev=372.81 00:24:03.157 lat (usec): min=2443, max=9308, avg=4776.29, stdev=372.82 00:24:03.157 clat percentiles (usec): 00:24:03.157 | 1.00th=[ 3884], 5.00th=[ 4178], 10.00th=[ 4359], 20.00th=[ 4490], 00:24:03.157 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:24:03.157 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:03.157 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 6456], 99.95th=[ 7767], 00:24:03.157 | 99.99th=[ 8848] 00:24:03.157 bw ( KiB/s): min=47304, max=48128, per=99.99%, avg=47686.00, stdev=338.74, samples=4 00:24:03.157 iops : min=11826, max=12032, avg=11921.50, stdev=84.69, samples=4 00:24:03.157 lat (msec) : 4=0.95%, 10=99.05% 00:24:03.157 cpu : usr=75.00%, sys=24.05%, ctx=83, majf=0, minf=2 00:24:03.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:03.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.157 issued rwts: total=24005,23906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.157 00:24:03.157 Run status group 0 (all jobs): 00:24:03.157 READ: bw=46.8MiB/s (49.0MB/s), 46.8MiB/s-46.8MiB/s (49.0MB/s-49.0MB/s), io=93.8MiB (98.3MB), run=2005-2005msec 00:24:03.157 WRITE: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=93.4MiB (97.9MB), run=2005-2005msec 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:03.157 17:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:03.415 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:03.415 fio-3.35 00:24:03.415 Starting 1 thread 00:24:05.943 00:24:05.943 test: (groupid=0, jobs=1): err= 0: pid=2671425: Mon Dec 9 17:34:34 2024 00:24:05.943 read: IOPS=10.8k, BW=169MiB/s (178MB/s)(340MiB/2006msec) 00:24:05.943 slat (usec): min=2, max=106, avg= 2.79, stdev= 1.36 00:24:05.943 clat (usec): min=1147, max=49716, avg=6902.51, stdev=3415.18 00:24:05.943 lat (usec): min=1150, max=49719, avg=6905.29, stdev=3415.26 00:24:05.943 clat percentiles (usec): 00:24:05.943 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5276], 00:24:05.943 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7111], 00:24:05.943 | 70.00th=[ 7439], 80.00th=[ 7963], 90.00th=[ 8848], 95.00th=[ 9503], 00:24:05.943 | 99.00th=[11338], 99.50th=[44303], 99.90th=[48497], 99.95th=[49021], 00:24:05.943 | 99.99th=[49546] 00:24:05.943 bw ( KiB/s): min=80896, max=99552, per=50.88%, avg=88256.00, stdev=8638.30, samples=4 00:24:05.943 iops : min= 5056, max= 6222, avg=5516.00, stdev=539.89, samples=4 00:24:05.943 write: IOPS=6779, BW=106MiB/s (111MB/s)(181MiB/1708msec); 0 zone resets 00:24:05.943 slat (usec): 
min=29, max=380, avg=31.23, stdev= 7.62 00:24:05.943 clat (usec): min=3779, max=14635, avg=8557.43, stdev=1415.41 00:24:05.943 lat (usec): min=3808, max=14746, avg=8588.66, stdev=1417.09 00:24:05.943 clat percentiles (usec): 00:24:05.943 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7373], 00:24:05.943 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:24:05.943 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:24:05.943 | 99.00th=[12125], 99.50th=[13173], 99.90th=[14091], 99.95th=[14353], 00:24:05.943 | 99.99th=[14484] 00:24:05.943 bw ( KiB/s): min=83136, max=104224, per=84.80%, avg=91992.00, stdev=9095.79, samples=4 00:24:05.943 iops : min= 5196, max= 6514, avg=5749.50, stdev=568.49, samples=4 00:24:05.943 lat (msec) : 2=0.01%, 4=1.74%, 10=90.94%, 20=6.93%, 50=0.38% 00:24:05.943 cpu : usr=85.59%, sys=13.37%, ctx=65, majf=0, minf=2 00:24:05.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:05.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:05.943 issued rwts: total=21746,11580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:05.943 00:24:05.943 Run status group 0 (all jobs): 00:24:05.943 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=340MiB (356MB), run=2006-2006msec 00:24:05.943 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=181MiB (190MB), run=1708-1708msec 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.943 rmmod nvme_tcp 00:24:05.943 rmmod nvme_fabrics 00:24:05.943 rmmod nvme_keyring 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2670296 ']' 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2670296 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2670296 ']' 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 2670296 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2670296 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2670296' 00:24:05.943 killing process with pid 2670296 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2670296 00:24:05.943 17:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2670296 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.202 17:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.108 00:24:08.108 real 0m15.704s 00:24:08.108 user 0m46.120s 00:24:08.108 sys 0m6.450s 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.108 ************************************ 00:24:08.108 END TEST nvmf_fio_host 00:24:08.108 ************************************ 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.108 17:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.368 ************************************ 00:24:08.368 START TEST nvmf_failover 00:24:08.368 ************************************ 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:08.368 * Looking for test storage... 00:24:08.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.368 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:08.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.369 --rc genhtml_branch_coverage=1 00:24:08.369 --rc genhtml_function_coverage=1 00:24:08.369 --rc genhtml_legend=1 00:24:08.369 --rc geninfo_all_blocks=1 00:24:08.369 --rc geninfo_unexecuted_blocks=1 00:24:08.369 00:24:08.369 ' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:08.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.369 --rc genhtml_branch_coverage=1 00:24:08.369 --rc genhtml_function_coverage=1 00:24:08.369 --rc genhtml_legend=1 00:24:08.369 --rc geninfo_all_blocks=1 00:24:08.369 --rc geninfo_unexecuted_blocks=1 00:24:08.369 00:24:08.369 ' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:08.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.369 --rc genhtml_branch_coverage=1 00:24:08.369 --rc genhtml_function_coverage=1 00:24:08.369 --rc genhtml_legend=1 00:24:08.369 --rc geninfo_all_blocks=1 00:24:08.369 --rc geninfo_unexecuted_blocks=1 00:24:08.369 00:24:08.369 ' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:08.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.369 --rc genhtml_branch_coverage=1 00:24:08.369 --rc genhtml_function_coverage=1 00:24:08.369 --rc genhtml_legend=1 00:24:08.369 --rc geninfo_all_blocks=1 00:24:08.369 --rc geninfo_unexecuted_blocks=1 00:24:08.369 00:24:08.369 ' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
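Note: the rpc_py wrapper defined above is the control path for all target provisioning in these tests. A condensed sketch of the sequence issued through it, with the command lines copied from the xtrace (it assumes an nvmf_tgt already listening on the default /var/tmp/spdk.sock; bdev and subsystem names mirror the log):

    # Sketch reconstructed from the trace above, not the full test script.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as logged
    $rpc_py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420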
00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.369 17:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:14.949 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.949 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:14.949 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:14.950 Found net devices under 0000:af:00.0: cvl_0_0 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:14.950 Found net devices under 0000:af:00.1: cvl_0_1 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:24:14.950 00:24:14.950 --- 10.0.0.2 ping statistics --- 00:24:14.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.950 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:14.950 00:24:14.950 --- 10.0.0.1 ping statistics --- 00:24:14.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.950 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2675184 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2675184 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2675184 ']' 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.950 17:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.950 [2024-12-09 17:34:43.454821] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:24:14.950 [2024-12-09 17:34:43.454871] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.950 [2024-12-09 17:34:43.535273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:14.950 [2024-12-09 17:34:43.575317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:14.950 [2024-12-09 17:34:43.575354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.950 [2024-12-09 17:34:43.575361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.950 [2024-12-09 17:34:43.575367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.950 [2024-12-09 17:34:43.575372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.950 [2024-12-09 17:34:43.576827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.950 [2024-12-09 17:34:43.576865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.950 [2024-12-09 17:34:43.576865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.210 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:15.468 [2024-12-09 17:34:44.496614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.468 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:15.727 Malloc0 00:24:15.727 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.985 17:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.243 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.243 [2024-12-09 17:34:45.337251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.243 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:16.500 [2024-12-09 17:34:45.525779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.500 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:16.758 [2024-12-09 17:34:45.718398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2675625 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2675625 /var/tmp/bdevperf.sock 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2675625 ']' 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.758 17:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.016 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.016 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:17.017 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:17.275 NVMe0n1 00:24:17.275 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:17.532 00:24:17.532 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.532 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2675850 00:24:17.532 17:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:18.906 17:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.906 [2024-12-09 17:34:47.862153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 [2024-12-09 17:34:47.862197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 [2024-12-09 17:34:47.862206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 [2024-12-09 
17:34:47.862213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 [2024-12-09 17:34:47.862229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 [2024-12-09 17:34:47.862235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 [2024-12-09 17:34:47.862242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fa760 is same with the state(6) to be set 00:24:18.906 17:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:22.188 17:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:22.188 00:24:22.188 17:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.446 [2024-12-09 17:34:51.463895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 [2024-12-09 17:34:51.463977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb410 is same with the state(6) to be set 00:24:22.446 17:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:25.729 17:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.729 [2024-12-09 17:34:54.681965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.729 17:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:26.663 17:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:26.921 [2024-12-09 17:34:55.893859] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847980 is same with the state(6) to be set
00:24:26.921 [2024-12-09 17:34:55.893896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847980 is same with the state(6) to be set
00:24:26.921 [2024-12-09 17:34:55.893903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847980 is same with the state(6) to be set
00:24:26.921 [2024-12-09 17:34:55.893910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847980 is same with the state(6) to be set
00:24:26.921 [2024-12-09 17:34:55.893916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847980 is same with the state(6) to be set
00:24:26.921 [2024-12-09 17:34:55.893922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847980 is same with the state(6) to be set
00:24:26.921 17:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2675850
00:24:33.486 {
00:24:33.486   "results": [
00:24:33.486     {
00:24:33.486       "job": "NVMe0n1",
00:24:33.486       "core_mask": "0x1",
00:24:33.486       "workload": "verify",
00:24:33.486       "status": "finished",
00:24:33.486       "verify_range": {
00:24:33.486         "start": 0,
00:24:33.486         "length": 16384
00:24:33.486       },
00:24:33.486       "queue_depth": 128,
00:24:33.486       "io_size": 4096,
00:24:33.486       "runtime": 15.00601,
00:24:33.486       "iops": 11289.543322975262,
00:24:33.486       "mibps": 44.099778605372116,
00:24:33.486       "io_failed": 12477,
00:24:33.486       "io_timeout": 0,
00:24:33.486       "avg_latency_us": 10537.391620924232,
00:24:33.486       "min_latency_us": 421.30285714285714,
00:24:33.486       "max_latency_us": 21221.180952380953
00:24:33.486     }
00:24:33.486   ],
00:24:33.486   "core_count": 1
00:24:33.486 }
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2675625
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2675625 ']'
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2675625
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2675625
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675625'
killing process with pid 2675625
00:24:33.486 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2675625
00:24:33.487 17:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2675625
00:24:33.487 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-09 17:34:45.791542] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
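What the trace above amounts to: host/failover.sh runs bdevperf in verify mode against one NVMe-oF TCP subsystem reachable through three listeners (10.0.0.2 ports 4420, 4421 and 4422), attaches each path to the same NVMe0 controller with -x failover so the extra paths stay passive, then removes and re-adds listeners while I/O is in flight to force bdev_nvme to fail over. The results block shows the cost: out of roughly 15 seconds of 128-deep 4 KiB verify I/O, 12477 I/Os ("io_failed") completed in error during the induced path flips (the run survives them because bdevperf was started with -f), and the job still finished at about 11.3k IOPS. A minimal sketch of that sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on those ports; the workspace path is taken from this log, and plain sleeps stand in for the waitforlisten gating the real script does:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$rootdir/scripts/rpc.py"
  # start bdevperf idle (-z) on its own RPC socket; -f lets it keep running past I/O failures
  "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  sleep 1
  # register two paths to the same controller; -x failover makes the later path a standby
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # start the I/O, then pull the active listener out from under it
  "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The try.txt dump that follows is bdevperf's own log of the run, replayed by the trap set when the test started; the part worth reading is what each listener removal did on the host side.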
00:24:33.487 [2024-12-09 17:34:45.791596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675625 ] 00:24:33.487 [2024-12-09 17:34:45.864414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.487 [2024-12-09 17:34:45.904411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.487 Running I/O for 15 seconds... 00:24:33.487 11193.00 IOPS, 43.72 MiB/s [2024-12-09T16:35:02.666Z] [2024-12-09 17:34:47.862776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.862985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.862992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 
17:34:47.863250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.487 [2024-12-09 17:34:47.863309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.487 [2024-12-09 17:34:47.863317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.487 [2024-12-09 17:34:47.863324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 
[2024-12-09 17:34:47.863853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.488 [2024-12-09 17:34:47.863867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.488 [2024-12-09 17:34:47.863874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.489 [2024-12-09 17:34:47.863888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.863990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.863998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.489 [2024-12-09 17:34:47.864005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.489 [2024-12-09 17:34:47.864435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.489 [2024-12-09 17:34:47.864441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.490 [2024-12-09 17:34:47.864585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.490 [2024-12-09 17:34:47.864686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.490 [2024-12-09 17:34:47.864712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.490 [2024-12-09 17:34:47.864718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102000 len:8 PRP1 0x0 PRP2 0x0 00:24:33.490 [2024-12-09 17:34:47.864727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864771] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:33.490 [2024-12-09 17:34:47.864793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.490 [2024-12-09 17:34:47.864800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.490 [2024-12-09 17:34:47.864813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.490 [2024-12-09 17:34:47.864827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.490 [2024-12-09 17:34:47.864840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.490 [2024-12-09 17:34:47.864847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:33.490 [2024-12-09 17:34:47.867630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:33.490 [2024-12-09 17:34:47.867656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23128d0 (9): Bad file descriptor 00:24:33.490 [2024-12-09 17:34:47.938829] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
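Decoded, the wall of NOTICEs above is a single failover as the host driver experienced it. Removing the 4420 listener made the target delete the connection's submission queue, so every command still outstanding on that qpair, the four admin ASYNC EVENT REQUESTs included, was completed back with status (00/08): status code type 00h (generic) and status code 08h, Command Aborted due to SQ Deletion, which is what each print_command/print_completion pair records. bdev_nvme then declared the path dead, logged the switch from 10.0.0.2:4420 to 10.0.0.2:4421, failed to flush the already-defunct TCP qpair while disconnecting (errno 9, Bad file descriptor), and reconnected on the standby path, hence the closing "Resetting controller successful". The per-second samples just below show the cost in throughput: a dip to about 10.9k IOPS around the switch before recovering past 11.2k. A quick, illustrative way to reduce a saved log like this to its signal (try.txt is the file cat'ed above; the grep patterns are simply taken from the messages in it):

  # one line per completion aborted by SQ deletion; the total across all three
  # listener removals should land near the io_failed figure in the results block
  grep -c 'ABORTED - SQ DELETION' try.txt
  # one line per path switch and one per successful reconnect
  grep -E 'Start failover from|Resetting controller successful' try.txt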
00:24:33.490 10916.50 IOPS, 42.64 MiB/s [2024-12-09T16:35:02.669Z] 11140.00 IOPS, 43.52 MiB/s [2024-12-09T16:35:02.669Z] 11292.50 IOPS, 44.11 MiB/s [2024-12-09T16:35:02.669Z]
[2024-12-09 17:34:51.465020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.490 [2024-12-09 17:34:51.465056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical print pairs elided: READ commands for lba 62376-62440 and WRITE commands for lba 62448-63080 on sqid:1, each followed by an ABORTED - SQ DELETION (00/08) completion]
00:24:33.493 [2024-12-09 17:34:51.466385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:33.493 [2024-12-09 17:34:51.466392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63088 len:8 PRP1 0x0 PRP2 0x0
00:24:33.493 [2024-12-09 17:34:51.466398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.493 [2024-12-09 17:34:51.466407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[the same sequence repeats here for the queued WRITEs at lba 63096-63384: "aborting queued i/o", "Command completed manually:", the WRITE command print, and an ABORTED - SQ DELETION (00/08) completion for each]
00:24:33.494 [2024-12-09 17:34:51.477811] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:33.494 [2024-12-09 17:34:51.477838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:33.494 [2024-12-09 17:34:51.477848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.494 [2024-12-09 17:34:51.477861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:33.494 [2024-12-09 17:34:51.477869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.494 [2024-12-09 17:34:51.477878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:33.494 [2024-12-09 17:34:51.477887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.494 [2024-12-09 17:34:51.477896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:33.494 [2024-12-09 17:34:51.477905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.494 [2024-12-09 17:34:51.477913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:33.494 [2024-12-09 17:34:51.477950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23128d0 (9): Bad file descriptor
00:24:33.494 [2024-12-09 17:34:51.481685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:33.494 [2024-12-09 17:34:51.512663] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
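The "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422" notice above is the multipath retry logic rotating the controller's target address: the connection to the listener on port 4421 failed, so the next reconnect attempt is pointed at the alternate listener on port 4422. A minimal sketch of that transport-ID rotation, assuming a static list of preconfigured paths; the names (g_paths, failover_trid) are hypothetical, not the bdev_nvme implementation.

    #include <stdio.h>

    /* A transport ID reduced to the two fields the log message shows. */
    struct trid { const char *addr; const char *svcid; };

    /* Alternate paths registered for the same subsystem. */
    static const struct trid g_paths[] = {
        { "10.0.0.2", "4421" },
        { "10.0.0.2", "4422" },
    };
    static unsigned g_active = 0;   /* index of the path currently in use */

    /* Advance to the next configured path after the current one failed;
     * the following reset attempt will connect to the new address. */
    static void failover_trid(void)
    {
        unsigned count = (unsigned)(sizeof(g_paths) / sizeof(g_paths[0]));
        unsigned next = (g_active + 1) % count;
        printf("Start failover from %s:%s to %s:%s\n",
               g_paths[g_active].addr, g_paths[g_active].svcid,
               g_paths[next].addr, g_paths[next].svcid);
        g_active = next;
    }

    int main(void)
    {
        /* e.g. triggered after "Failed to flush tqpair ... Bad file descriptor" */
        failover_trid();
        return 0;
    }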
00:24:33.494 11204.60 IOPS, 43.77 MiB/s [2024-12-09T16:35:02.673Z] 11280.50 IOPS, 44.06 MiB/s [2024-12-09T16:35:02.673Z] 11294.86 IOPS, 44.12 MiB/s [2024-12-09T16:35:02.673Z] 11319.75 IOPS, 44.22 MiB/s [2024-12-09T16:35:02.673Z] 11316.11 IOPS, 44.20 MiB/s [2024-12-09T16:35:02.673Z]
[2024-12-09 17:34:55.894097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.494 [2024-12-09 17:34:55.894130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical print pairs elided: WRITE commands for lba 82400-82504 and READ commands for lba 81560-81744 on sqid:1, each followed by an ABORTED - SQ DELETION (00/08) completion]
00:24:33.495 [2024-12-09 17:34:55.894708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.495 [2024-12-09 17:34:55.894715] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.495 [2024-12-09 17:34:55.894723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.495 [2024-12-09 17:34:55.894729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.495 [2024-12-09 17:34:55.894736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.495 [2024-12-09 17:34:55.894743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.495 [2024-12-09 17:34:55.894751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.495 [2024-12-09 17:34:55.894757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.495 [2024-12-09 17:34:55.894765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.496 [2024-12-09 17:34:55.894947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.496 [2024-12-09 17:34:55.894962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.894991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.894999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:33.496 [2024-12-09 17:34:55.895158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895310] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.496 [2024-12-09 17:34:55.895358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.496 [2024-12-09 17:34:55.895365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:111 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.497 [2024-12-09 17:34:55.895887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:33.497 [2024-12-09 17:34:55.895902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.497 [2024-12-09 17:34:55.895916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.497 [2024-12-09 17:34:55.895931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.497 [2024-12-09 17:34:55.895944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.497 [2024-12-09 17:34:55.895952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.498 [2024-12-09 17:34:55.895958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.895966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.498 [2024-12-09 17:34:55.895973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.895980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.498 [2024-12-09 17:34:55.895987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.896006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.498 [2024-12-09 17:34:55.896012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.498 [2024-12-09 17:34:55.896018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82576 len:8 PRP1 0x0 PRP2 0x0 00:24:33.498 [2024-12-09 17:34:55.896026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.896069] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:33.498 [2024-12-09 17:34:55.896089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.498 [2024-12-09 17:34:55.896096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.896104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
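(Aside: the long run of paired *NOTICE* lines above is the SPDK host draining I/O queue pair 1 during failover — each queued READ/WRITE is echoed by nvme_io_qpair_print_command and then completed manually as ABORTED - SQ DELETION, where (00/08) means status code type 0x0, Generic Command Status, and status code 0x08, Command Aborted due to SQ Deletion. The remaining admin-queue ASYNC EVENT REQUEST aborts and the controller reset follow below. As a rough illustration only — these one-liners are not part of failover.sh — the abort storm can be summarized from the saved log, try.txt being the file the test writes and later cats:

    # tally aborted I/O per opcode (illustrative, GNU grep assumed)
    grep -o '\(READ\|WRITE\) sqid:1' try.txt | sort | uniq -c
    # total manually-completed commands
    grep -c 'ABORTED - SQ DELETION' try.txt
)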
00:24:33.498 [2024-12-09 17:34:55.896110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.896117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.498 [2024-12-09 17:34:55.896123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.896130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.498 [2024-12-09 17:34:55.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.498 [2024-12-09 17:34:55.896144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:33.498 [2024-12-09 17:34:55.898949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:33.498 [2024-12-09 17:34:55.898979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23128d0 (9): Bad file descriptor 00:24:33.498 [2024-12-09 17:34:56.047763] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:33.498 11160.40 IOPS, 43.60 MiB/s [2024-12-09T16:35:02.677Z] 11192.36 IOPS, 43.72 MiB/s [2024-12-09T16:35:02.677Z] 11231.58 IOPS, 43.87 MiB/s [2024-12-09T16:35:02.677Z] 11268.23 IOPS, 44.02 MiB/s [2024-12-09T16:35:02.677Z] 11274.86 IOPS, 44.04 MiB/s 00:24:33.498 Latency(us) 00:24:33.498 [2024-12-09T16:35:02.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:33.498 Verification LBA range: start 0x0 length 0x4000 00:24:33.498 NVMe0n1 : 15.01 11289.54 44.10 831.47 0.00 10537.39 421.30 21221.18 00:24:33.498 [2024-12-09T16:35:02.677Z] =================================================================================================================== 00:24:33.498 [2024-12-09T16:35:02.677Z] Total : 11289.54 44.10 831.47 0.00 10537.39 421.30 21221.18 00:24:33.498 Received shutdown signal, test time was about 15.000000 seconds 00:24:33.498 00:24:33.498 Latency(us) 00:24:33.498 [2024-12-09T16:35:02.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.498 [2024-12-09T16:35:02.677Z] =================================================================================================================== 00:24:33.498 [2024-12-09T16:35:02.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2678306 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2678306 /var/tmp/bdevperf.sock 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2678306 ']' 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.498 [2024-12-09 17:35:02.445696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.498 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.498 [2024-12-09 17:35:02.634253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:33.756 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:33.756 NVMe0n1 00:24:33.756 17:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.322 00:24:34.322 17:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.580 00:24:34.580 17:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.580 17:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:34.838 17:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.838 17:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:38.118 17:35:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.119 17:35:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:38.119 17:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.119 17:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2679036 00:24:38.119 17:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2679036 00:24:39.493 { 00:24:39.493 "results": [ 00:24:39.493 { 00:24:39.493 "job": "NVMe0n1", 00:24:39.493 "core_mask": "0x1", 00:24:39.493 "workload": "verify", 00:24:39.493 "status": "finished", 00:24:39.493 "verify_range": { 00:24:39.493 "start": 0, 00:24:39.493 "length": 16384 00:24:39.493 }, 00:24:39.493 "queue_depth": 128, 00:24:39.493 "io_size": 4096, 00:24:39.493 "runtime": 1.015033, 00:24:39.493 "iops": 11441.00733670728, 00:24:39.493 "mibps": 44.69143490901281, 00:24:39.493 "io_failed": 0, 00:24:39.493 "io_timeout": 0, 00:24:39.493 "avg_latency_us": 11146.639714933593, 00:24:39.493 "min_latency_us": 2309.3638095238093, 00:24:39.493 "max_latency_us": 9362.285714285714 00:24:39.493 } 00:24:39.493 ], 00:24:39.493 "core_count": 1 00:24:39.493 } 00:24:39.493 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:39.493 [2024-12-09 17:35:02.072259] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:24:39.493 [2024-12-09 17:35:02.072312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678306 ] 00:24:39.493 [2024-12-09 17:35:02.148791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.493 [2024-12-09 17:35:02.186511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.493 [2024-12-09 17:35:03.935881] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:39.493 [2024-12-09 17:35:03.935925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.493 [2024-12-09 17:35:03.935936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.493 [2024-12-09 17:35:03.935945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.493 [2024-12-09 17:35:03.935952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.493 [2024-12-09 17:35:03.935959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.493 [2024-12-09 17:35:03.935965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.493 [2024-12-09 17:35:03.935972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.493 [2024-12-09 17:35:03.935979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.493 [2024-12-09 17:35:03.935986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:39.493 [2024-12-09 17:35:03.936010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:39.493 [2024-12-09 17:35:03.936024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114e8d0 (9): Bad file descriptor 00:24:39.493 [2024-12-09 17:35:03.941010] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:39.493 Running I/O for 1 seconds... 00:24:39.493 11360.00 IOPS, 44.38 MiB/s 00:24:39.493 Latency(us) 00:24:39.493 [2024-12-09T16:35:08.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.493 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:39.493 Verification LBA range: start 0x0 length 0x4000 00:24:39.493 NVMe0n1 : 1.02 11441.01 44.69 0.00 0.00 11146.64 2309.36 9362.29 00:24:39.493 [2024-12-09T16:35:08.672Z] =================================================================================================================== 00:24:39.493 [2024-12-09T16:35:08.672Z] Total : 11441.01 44.69 0.00 0.00 11146.64 2309.36 9362.29 00:24:39.493 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.494 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:39.494 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.752 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.752 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:39.752 17:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.009 17:35:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2678306 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2678306 ']' 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2678306 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678306 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678306' 00:24:43.290 killing process with pid 2678306 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2678306 00:24:43.290 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2678306 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.548 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.548 rmmod nvme_tcp 00:24:43.548 rmmod nvme_fabrics 00:24:43.806 rmmod nvme_keyring 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2675184 ']' 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2675184 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2675184 ']' 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2675184 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2675184 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675184' 00:24:43.806 killing process with pid 2675184 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@973 -- # kill 2675184 00:24:43.806 17:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2675184 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.066 17:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.972 00:24:45.972 real 0m37.779s 00:24:45.972 user 1m59.500s 00:24:45.972 sys 0m7.965s 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.972 ************************************ 00:24:45.972 END TEST nvmf_failover 00:24:45.972 ************************************ 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.972 17:35:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.231 ************************************ 00:24:46.231 START TEST nvmf_host_discovery 00:24:46.231 ************************************ 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:46.231 * Looking for test storage... 
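(Aside, for reference before the discovery output continues: the nvmf_failover suite that just ended drives path failover through bdevperf's RPC socket. Condensed to its core, as an illustrative sketch reusing the exact commands from the trace above — rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and a target already listening on 10.0.0.2:4420 is assumed; the test itself attaches all three portals the same way:

    # publish two extra portals for the same subsystem
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the bdev in failover mode, then drop the active path to force a reset
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # pass criterion from host/failover.sh@65-67 above: exactly three successful resets logged
    grep -c 'Resetting controller successful' try.txt

Detaching the active path makes the TCP qpair flush fail with Bad file descriptor, bdev_nvme fails the controller over to the next portal, and each recovery shows up in the log as 'Resetting controller successful.'
)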
00:24:46.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.231 --rc genhtml_branch_coverage=1 00:24:46.231 --rc genhtml_function_coverage=1 00:24:46.231 --rc genhtml_legend=1 00:24:46.231 --rc geninfo_all_blocks=1 00:24:46.231 --rc geninfo_unexecuted_blocks=1 00:24:46.231 00:24:46.231 ' 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.231 --rc genhtml_branch_coverage=1 00:24:46.231 --rc genhtml_function_coverage=1 00:24:46.231 --rc genhtml_legend=1 00:24:46.231 --rc geninfo_all_blocks=1 00:24:46.231 --rc geninfo_unexecuted_blocks=1 00:24:46.231 00:24:46.231 ' 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.231 --rc genhtml_branch_coverage=1 00:24:46.231 --rc genhtml_function_coverage=1 00:24:46.231 --rc genhtml_legend=1 00:24:46.231 --rc geninfo_all_blocks=1 00:24:46.231 --rc geninfo_unexecuted_blocks=1 00:24:46.231 00:24:46.231 ' 00:24:46.231 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.232 --rc genhtml_branch_coverage=1 00:24:46.232 --rc genhtml_function_coverage=1 00:24:46.232 --rc genhtml_legend=1 00:24:46.232 --rc geninfo_all_blocks=1 00:24:46.232 --rc geninfo_unexecuted_blocks=1 00:24:46.232 00:24:46.232 ' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:46.232 17:35:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.232 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.863 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:52.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:52.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.864 17:35:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:52.864 Found net devices under 0000:af:00.0: cvl_0_0 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:52.864 Found net devices under 0000:af:00.1: cvl_0_1 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.864 
17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.864 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:24:52.864 00:24:52.864 --- 10.0.0.2 ping statistics --- 00:24:52.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.864 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:24:52.864 00:24:52.864 --- 10.0.0.1 ping statistics --- 00:24:52.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.864 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:52.864 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2683441 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2683441 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2683441 ']' 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 [2024-12-09 17:35:21.399960] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:24:52.865 [2024-12-09 17:35:21.400011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.865 [2024-12-09 17:35:21.479810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.865 [2024-12-09 17:35:21.518735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.865 [2024-12-09 17:35:21.518770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.865 [2024-12-09 17:35:21.518777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.865 [2024-12-09 17:35:21.518783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.865 [2024-12-09 17:35:21.518789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.865 [2024-12-09 17:35:21.519326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 [2024-12-09 17:35:21.655247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 [2024-12-09 17:35:21.667428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 null0 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 null1 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2683519 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2683519 /tmp/host.sock 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2683519 ']' 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:52.865 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 [2024-12-09 17:35:21.743127] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:24:52.865 [2024-12-09 17:35:21.743170] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683519 ] 00:24:52.865 [2024-12-09 17:35:21.815806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.865 [2024-12-09 17:35:21.856370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.865 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 [2024-12-09 17:35:22.272939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:53.164 17:35:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.164 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.422 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:53.423 17:35:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:53.987 [2024-12-09 17:35:23.013677] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:53.987 [2024-12-09 17:35:23.013695] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:53.987 [2024-12-09 17:35:23.013707] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:53.987 
[2024-12-09 17:35:23.099960] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:54.245 [2024-12-09 17:35:23.274875] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:54.245 [2024-12-09 17:35:23.275569] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbd1260:1 started. 00:24:54.245 [2024-12-09 17:35:23.276918] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:54.245 [2024-12-09 17:35:23.276933] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:54.245 [2024-12-09 17:35:23.282141] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbd1260 was disconnected and freed. delete nvme_qpair. 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.503 17:35:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.503 [2024-12-09 17:35:23.677233] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbd1440:1 started. 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.503 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:54.762 [2024-12-09 17:35:23.683141] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbd1440 was disconnected and freed. delete nvme_qpair. 
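
The eval / (( max-- )) xtrace above is the test's generic polling helper driving the discovery.sh accessors. A minimal reconstruction from the @918-@924, @55 and @74-@75 trace tags follows; the upstream bodies in autotest_common.sh and discovery.sh may carry extra handling not visible in this run, and rpc_cmd is the harness wrapper around scripts/rpc.py.

    waitforcondition() {
        local cond=$1          # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0       # the @922 'return 0' lines above
            fi
            sleep 1            # the @924 'sleep 1' seen when a check misses
        done
        return 1               # ten misses fail the test
    }

    get_bdev_list() {          # @55: bdev names from the host app as one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() { # @74-@75: events past the notify_id cursor
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Each bdev register/unregister event advances the cursor, which is why notify_id steps 0 -> 1 -> 2 across the two namespace additions here and reaches 4 once discovery is stopped and both bdevs unregister further down.
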
00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:54.762 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.763 [2024-12-09 17:35:23.777020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.763 [2024-12-09 17:35:23.777571] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:54.763 [2024-12-09 17:35:23.777591] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.763 [2024-12-09 17:35:23.863831] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:54.763 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:55.021 [2024-12-09 17:35:24.128927] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:55.021 [2024-12-09 17:35:24.128960] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:55.021 [2024-12-09 17:35:24.128972] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:55.021 [2024-12-09 17:35:24.128976] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:56.013 17:35:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.013 17:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.013 [2024-12-09 17:35:25.021409] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:56.013 [2024-12-09 17:35:25.021435] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:56.013 [2024-12-09 17:35:25.029303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.013 [2024-12-09 17:35:25.029324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.013 [2024-12-09 17:35:25.029333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.013 [2024-12-09 17:35:25.029339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.013 [2024-12-09 17:35:25.029346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.013 [2024-12-09 17:35:25.029353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.013 [2024-12-09 17:35:25.029360] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.013 [2024-12-09 17:35:25.029367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.013 [2024-12-09 17:35:25.029373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.013 [2024-12-09 17:35:25.039317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.013 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.013 [2024-12-09 17:35:25.049352] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:56.013 [2024-12-09 17:35:25.049363] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.013 [2024-12-09 17:35:25.049369] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.013 [2024-12-09 17:35:25.049374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.013 [2024-12-09 17:35:25.049392] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:56.013 [2024-12-09 17:35:25.049604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-12-09 17:35:25.049624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba1710 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-12-09 17:35:25.049632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.013 [2024-12-09 17:35:25.049644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.013 [2024-12-09 17:35:25.049661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:56.013 [2024-12-09 17:35:25.049668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:56.013 [2024-12-09 17:35:25.049677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:56.013 [2024-12-09 17:35:25.049683] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:56.013 [2024-12-09 17:35:25.049688] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
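
The errno = 111 storm above is the host-side fallout of the nvmf_subsystem_remove_listener call: 111 is ECONNREFUSED, since the target no longer listens on 10.0.0.2:4420. Tearing down the dead admin qpair also completes its four queued ASYNC EVENT REQUEST commands as ABORTED - SQ DELETION, and bdev_nvme then cycles through delete-qpairs / disconnect / reconnect until the discovery log page stops advertising 4420. A sketch of the test-side flow, with get_subsystem_paths reconstructed from the @63 trace tags:

    # Target-side RPC that triggered the churn (no -s flag: it goes to the
    # nvmf target's default socket, not the /tmp/host.sock host app)
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    get_subsystem_paths() {    # @63: trsvcid of every path to one controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $1 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # $NVMF_SECOND_PORT is 4421 in this run
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
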
00:24:56.013 [2024-12-09 17:35:25.049692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:56.013 [2024-12-09 17:35:25.059423] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:56.013 [2024-12-09 17:35:25.059433] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.013 [2024-12-09 17:35:25.059437] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.013 [2024-12-09 17:35:25.059441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.013 [2024-12-09 17:35:25.059454] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:56.013 [2024-12-09 17:35:25.059679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-12-09 17:35:25.059692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba1710 with addr=10.0.0.2, port=4420 00:24:56.014 [2024-12-09 17:35:25.059700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.014 [2024-12-09 17:35:25.059710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.014 [2024-12-09 17:35:25.059726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:56.014 [2024-12-09 17:35:25.059733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:56.014 [2024-12-09 17:35:25.059741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:56.014 [2024-12-09 17:35:25.059746] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:56.014 [2024-12-09 17:35:25.059751] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:56.014 [2024-12-09 17:35:25.059755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:56.014 [2024-12-09 17:35:25.069485] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:56.014 [2024-12-09 17:35:25.069498] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.014 [2024-12-09 17:35:25.069502] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.014 [2024-12-09 17:35:25.069507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.014 [2024-12-09 17:35:25.069521] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:56.014 [2024-12-09 17:35:25.069803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.014 [2024-12-09 17:35:25.069817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba1710 with addr=10.0.0.2, port=4420 00:24:56.014 [2024-12-09 17:35:25.069825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.014 [2024-12-09 17:35:25.069836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.014 [2024-12-09 17:35:25.069853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:56.014 [2024-12-09 17:35:25.069860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:56.014 [2024-12-09 17:35:25.069866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:56.014 [2024-12-09 17:35:25.069872] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:56.014 [2024-12-09 17:35:25.069876] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:56.014 [2024-12-09 17:35:25.069880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:56.014 [2024-12-09 17:35:25.079552] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:56.014 [2024-12-09 17:35:25.079564] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.014 [2024-12-09 17:35:25.079568] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.014 [2024-12-09 17:35:25.079572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.014 [2024-12-09 17:35:25.079584] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:56.014 [2024-12-09 17:35:25.079806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.014 [2024-12-09 17:35:25.079819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba1710 with addr=10.0.0.2, port=4420 00:24:56.014 [2024-12-09 17:35:25.079827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.014 [2024-12-09 17:35:25.079837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.014 [2024-12-09 17:35:25.079847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:56.014 [2024-12-09 17:35:25.079854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:56.014 [2024-12-09 17:35:25.079860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:56.014 [2024-12-09 17:35:25.079866] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:56.014 [2024-12-09 17:35:25.079873] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:56.014 [2024-12-09 17:35:25.079877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.014 [2024-12-09 17:35:25.089615] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:56.014 [2024-12-09 17:35:25.089628] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.014 [2024-12-09 17:35:25.089632] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.014 [2024-12-09 17:35:25.089636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.014 [2024-12-09 17:35:25.089651] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:56.014 [2024-12-09 17:35:25.089807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.014 [2024-12-09 17:35:25.089820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba1710 with addr=10.0.0.2, port=4420 00:24:56.014 [2024-12-09 17:35:25.089827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.014 [2024-12-09 17:35:25.089838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.014 [2024-12-09 17:35:25.089848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:56.014 [2024-12-09 17:35:25.089855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:56.014 [2024-12-09 17:35:25.089861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:56.014 [2024-12-09 17:35:25.089867] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:56.014 [2024-12-09 17:35:25.089872] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:56.014 [2024-12-09 17:35:25.089876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:56.014 [2024-12-09 17:35:25.099683] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:56.014 [2024-12-09 17:35:25.099693] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.014 [2024-12-09 17:35:25.099697] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.014 [2024-12-09 17:35:25.099701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.014 [2024-12-09 17:35:25.099714] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:56.014 [2024-12-09 17:35:25.099888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.014 [2024-12-09 17:35:25.099900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba1710 with addr=10.0.0.2, port=4420 00:24:56.014 [2024-12-09 17:35:25.099907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba1710 is same with the state(6) to be set 00:24:56.014 [2024-12-09 17:35:25.099922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1710 (9): Bad file descriptor 00:24:56.014 [2024-12-09 17:35:25.099931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:56.014 [2024-12-09 17:35:25.099938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:56.014 [2024-12-09 17:35:25.099944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:56.014 [2024-12-09 17:35:25.099950] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
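
Once the discovery poller fetches a log page that no longer advertises 4420, discovery_remove_controllers detaches that path (the "4420 not found" entry just below) and the reconnect loop stops. The test then confirms that only the 4421 path remains and moves on to stopping and restarting the discovery service; a sketch of that sequence as it appears in the @134-@141 trace tags:

    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    # detaching nvme0 unregisters nvme0n1/nvme0n2, so both lists drain ...
    waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
    # ... and the two unregister events advance the notification cursor to 4
    is_notification_count_eq 2

    # -w (wait_for_attach) blocks until the rediscovered controller attaches
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
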
00:24:56.014 [2024-12-09 17:35:25.099954] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:56.014 [2024-12-09 17:35:25.099958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:56.014 [2024-12-09 17:35:25.109051] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:56.014 [2024-12-09 17:35:25.109066] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:56.014 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.015 17:35:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.015 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.273 17:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.207 [2024-12-09 17:35:26.381624] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:57.207 [2024-12-09 17:35:26.381640] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:57.207 [2024-12-09 17:35:26.381650] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:57.465 [2024-12-09 17:35:26.467905] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:57.465 [2024-12-09 17:35:26.566602] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:57.465 [2024-12-09 17:35:26.567165] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xd08920:1 started. 00:24:57.465 [2024-12-09 17:35:26.568709] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:57.465 [2024-12-09 17:35:26.568733] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:57.465 [2024-12-09 17:35:26.570825] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xd08920 was disconnected and freed. delete nvme_qpair. 
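
The remaining checks are negative tests wrapped in the NOT helper: another bdev_nvme_start_discovery against a discovery endpoint already being polled must fail with JSON-RPC error -17 "File exists" (both when reusing the name nvme and with the new name nvme_second), and one pointed at 10.0.0.2:8010, where nothing listens, must give up after the 3000 ms attach timeout with -110 "Connection timed out". A simplified core of NOT from the @652-@679 trace tags (the full helper also maps signal exits, which is omitted here):

    NOT() {
        local es=0
        "$@" || es=$?
        ((es != 0))    # succeed only if the wrapped command failed
    }

    # Duplicate discovery for the same trid -> -17 "File exists"
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Closed port: ~3 s of ECONNREFUSED retries (-T is attach_timeout_ms),
    # then -110 "Connection timed out"
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
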
00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.465 request: 00:24:57.465 { 00:24:57.465 "name": "nvme", 00:24:57.465 "trtype": "tcp", 00:24:57.465 "traddr": "10.0.0.2", 00:24:57.465 "adrfam": "ipv4", 00:24:57.465 "trsvcid": "8009", 00:24:57.465 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:57.465 "wait_for_attach": true, 00:24:57.465 "method": "bdev_nvme_start_discovery", 00:24:57.465 "req_id": 1 00:24:57.465 } 00:24:57.465 Got JSON-RPC error response 00:24:57.465 response: 00:24:57.465 { 00:24:57.465 "code": -17, 00:24:57.465 "message": "File exists" 00:24:57.465 } 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:57.465 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.466 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.724 request: 00:24:57.724 { 00:24:57.724 "name": "nvme_second", 00:24:57.724 "trtype": "tcp", 00:24:57.724 "traddr": "10.0.0.2", 00:24:57.724 "adrfam": "ipv4", 00:24:57.724 "trsvcid": "8009", 00:24:57.724 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:57.724 "wait_for_attach": true, 00:24:57.724 "method": "bdev_nvme_start_discovery", 00:24:57.724 "req_id": 1 00:24:57.724 } 00:24:57.724 Got JSON-RPC error response 00:24:57.724 response: 00:24:57.724 { 00:24:57.724 "code": -17, 00:24:57.724 "message": "File exists" 00:24:57.724 } 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.724 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.725 17:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.659 [2024-12-09 17:35:27.804496] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:24:58.659 [2024-12-09 17:35:27.804522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb0530 with addr=10.0.0.2, port=8010 00:24:58.659 [2024-12-09 17:35:27.804533] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:58.659 [2024-12-09 17:35:27.804540] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:58.659 [2024-12-09 17:35:27.804546] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:00.035 [2024-12-09 17:35:28.806867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.035 [2024-12-09 17:35:28.806892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb0530 with addr=10.0.0.2, port=8010 00:25:00.035 [2024-12-09 17:35:28.806903] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:00.035 [2024-12-09 17:35:28.806910] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:00.035 [2024-12-09 17:35:28.806916] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:00.971 [2024-12-09 17:35:29.809110] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:00.971 request: 00:25:00.971 { 00:25:00.971 "name": "nvme_second", 00:25:00.971 "trtype": "tcp", 00:25:00.971 "traddr": "10.0.0.2", 00:25:00.971 "adrfam": "ipv4", 00:25:00.971 "trsvcid": "8010", 00:25:00.971 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:00.971 "wait_for_attach": false, 00:25:00.971 "attach_timeout_ms": 3000, 00:25:00.971 "method": "bdev_nvme_start_discovery", 00:25:00.971 "req_id": 1 00:25:00.971 } 00:25:00.971 Got JSON-RPC error response 00:25:00.971 response: 00:25:00.971 { 00:25:00.971 "code": -110, 00:25:00.971 "message": "Connection timed out" 00:25:00.971 } 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:00.971 17:35:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2683519 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.971 rmmod nvme_tcp 00:25:00.971 rmmod nvme_fabrics 00:25:00.971 rmmod nvme_keyring 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2683441 ']' 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2683441 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2683441 ']' 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2683441 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2683441 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2683441' 00:25:00.971 killing process with pid 2683441 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2683441 00:25:00.971 17:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2683441 00:25:00.971 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.971 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.971 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.971 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.230 17:35:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.230 17:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.135 00:25:03.135 real 0m17.062s 00:25:03.135 user 0m20.156s 00:25:03.135 sys 0m5.745s 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.135 ************************************ 00:25:03.135 END TEST nvmf_host_discovery 00:25:03.135 ************************************ 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.135 ************************************ 00:25:03.135 START TEST nvmf_host_multipath_status 00:25:03.135 ************************************ 00:25:03.135 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:03.395 * Looking for test storage... 
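The discovery test that just finished exercised both failure paths of bdev_nvme_start_discovery visible in the error responses above: asking for a second discovery service on the already-claimed 10.0.0.2:8009 returns JSON-RPC error -17 ("File exists"), while pointing one at the unserved port 8010 with a 3000 ms attach timeout returns -110 ("Connection timed out") after three connect() failures with errno 111. A minimal manual reproduction against the same fixture, assuming the host application is still serving /tmp/host.sock:

    # duplicate discovery service on a live port -> -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w
    # nothing listens on 8010; -T 3000 bounds the attach -> -110 "Connection timed out"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000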
00:25:03.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.395 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:03.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.395 --rc genhtml_branch_coverage=1 00:25:03.395 --rc genhtml_function_coverage=1 00:25:03.395 --rc genhtml_legend=1 00:25:03.395 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
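The xtrace block above steps through scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on ".", "-" and ":" and compares them numeric field by numeric field (here 1.15 < 2, so the older-style LCOV_OPTS set is exported). The same component-wise check as a standalone sketch; ver_lt is a hypothetical name, not an SPDK helper:

    ver_lt() {
        # return 0 when dotted version $1 sorts before $2, numeric field by field
        local IFS=.- i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }
    ver_lt 1.15 2 && echo "lcov older than 2.x"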
00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.396 17:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.966 17:35:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:09.966 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
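nvmf/common.sh builds its NIC candidate list from PCI device IDs (0x8086:0x1592 and 0x8086:0x159b are Intel E810 parts, 0x37d2 is X722, the 0x15b3 entries are Mellanox) and then, for each matching function, resolves the bound kernel interfaces through the sysfs glob traced above, producing the "Found net devices" lines that follow. The sysfs lookup in isolation, for the two functions detected in this run:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # every entry under <device>/net/ is a netdev driven by that PCI function
        ls "/sys/bus/pci/devices/$pci/net/"
    done
    # in this run: cvl_0_0 under af:00.0 and cvl_0_1 under af:00.1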
00:25:09.966 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:09.967 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:09.967 Found net devices under 0000:af:00.0: cvl_0_0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:25:09.967 Found net devices under 0000:af:00.1: cvl_0_1 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.967 17:35:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:25:09.967 00:25:09.967 --- 10.0.0.2 ping statistics --- 00:25:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.967 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:25:09.967 00:25:09.967 --- 10.0.0.1 ping statistics --- 00:25:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.967 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2688492 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2688492 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2688492 ']' 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.967 17:35:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:09.967 [2024-12-09 17:35:38.430402] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:25:09.967 [2024-12-09 17:35:38.430445] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.967 [2024-12-09 17:35:38.509018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:09.967 [2024-12-09 17:35:38.549427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.967 [2024-12-09 17:35:38.549464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.967 [2024-12-09 17:35:38.549471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.967 [2024-12-09 17:35:38.549477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.967 [2024-12-09 17:35:38.549482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.967 [2024-12-09 17:35:38.550664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.967 [2024-12-09 17:35:38.550666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.967 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2688492 00:25:09.968 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:09.968 [2024-12-09 17:35:38.855665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.968 17:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:09.968 Malloc0 00:25:09.968 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:10.226 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.484 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.742 [2024-12-09 17:35:39.680895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.742 [2024-12-09 17:35:39.869374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2688756 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2688756 /var/tmp/bdevperf.sock 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2688756 ']' 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
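At this point the multipath fixture is fully assembled: a 64 MiB malloc bdev is exported as a namespace of nqn.2016-06.io.spdk:cnode1 on both 10.0.0.2:4420 and 10.0.0.2:4421 inside the target namespace, and bdevperf (pid 2688756, RPC socket /var/tmp/bdevperf.sock) acts as the multipath host. A condensed target-side recap of the RPCs traced above, with the rpc.py path shortened for readability:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421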
00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.742 17:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.000 17:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.000 17:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:11.000 17:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:11.257 17:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:11.515 Nvme0n1 00:25:11.773 17:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:12.031 Nvme0n1 00:25:12.031 17:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:12.031 17:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:13.931 17:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:13.931 17:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:14.190 17:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:14.448 17:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:15.381 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:15.381 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.381 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.381 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.639 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.639 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:15.639 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.639 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.897 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.897 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.897 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.897 17:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.154 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.154 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.154 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.154 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.411 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.411 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.411 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.412 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.412 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.412 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:16.412 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.412 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.669 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.669 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:16.670 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
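The check_status round above is the pattern repeated after every ANA transition in this test: port_status queries bdev_nvme_get_io_paths over the bdevperf RPC socket and pulls one boolean (current, connected or accessible) for the path using a given NVMe/TCP port. Distilled into a self-contained helper around the same jq filter:

    port_status() {
        local port=$1 field=$2   # field: current | connected | accessible
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
    }
    port_status 4420 current    # "true" while 4420 is the preferred path

With both listeners optimized, 4420 was picked as the current path while 4421 stayed connected and accessible but not current, which is exactly what the [[ true == true ]] and [[ false == false ]] assertions above verify; the set_ANA_state calls that follow flip those roles.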
00:25:16.928 17:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:17.185 17:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:18.118 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:18.118 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.118 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.118 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.376 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.376 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:18.376 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.376 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.633 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.633 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.633 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.633 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.890 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.890 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.890 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.890 17:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.890 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.890 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.891 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:18.891 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.148 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.148 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.148 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.148 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.406 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.406 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:19.406 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:19.664 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:19.922 17:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:20.854 17:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:20.854 17:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:20.854 17:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.854 17:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.112 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.112 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:21.112 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.112 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.370 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.628 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.628 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.628 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.628 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.886 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.886 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.886 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.886 17:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.143 17:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.143 17:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:22.143 17:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:22.400 17:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:22.400 17:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.773 17:35:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.773 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.031 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.031 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.031 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.031 17:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.289 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.547 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.547 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:24.547 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.547 17:35:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.805 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.805 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:24.805 17:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:25.063 17:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:25.320 17:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:26.251 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:26.251 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:26.252 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.252 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.509 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.766 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.766 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.767 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.767 17:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.025 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.025 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:27.025 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.025 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.282 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.282 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:27.283 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.283 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.283 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.283 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:27.540 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:27.540 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.798 17:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:28.731 17:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:28.732 17:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:28.732 17:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.732 17:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.989 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.989 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:28.989 17:35:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.990 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.248 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.248 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.248 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.248 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.506 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.763 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.764 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.764 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.764 17:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.021 17:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.022 17:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:30.280 17:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:30.280 17:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:30.538 17:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:30.795 17:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:31.729 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:31.729 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.729 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.729 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.987 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.987 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:31.987 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.987 17:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.987 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.987 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.244 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.244 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.244 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.244 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.244 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.244 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.502 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.502 17:36:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.502 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.502 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.760 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.760 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.760 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.760 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.017 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.017 17:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:33.017 17:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.275 17:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.275 17:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.648 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.905 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.905 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.905 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.905 17:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.905 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.905 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.905 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.905 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.162 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.162 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.162 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.162 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.419 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.419 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.419 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.419 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.677 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.677 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:35.677 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:35.677 17:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:35.934 17:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
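Each set_ANA_state step traced above (host/multipath_status.sh@59 and @60) reprograms the ANA state the target advertises for its two TCP listeners, and the sleep 1 that follows gives the host side time to digest the resulting ANA change notification before any expectations are checked. A minimal sketch of the helper as traced, assuming the script keeps the SPDK checkout root in $rootdir (both rpc.py invocations are verbatim from the log):

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener;
    # states exercised in this run: optimized, non_optimized, inaccessible.
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Since bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active was applied at sh@116 above, all usable paths of the best available ANA class may be current at once; that is why the non_optimized/non_optimized transition here is expected to leave both ports current, i.e. the check_status true true true true true true that follows.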
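The verification itself is the check_status/port_status pair seen in every cycle: one rpc.py + jq pipeline per attribute, compared against the expected literal. A sketch reconstructed from the sh@64 and sh@68-@73 trace lines (the function bodies and local names are assumptions; the RPC socket, RPC method, and jq filter are verbatim from the trace):

check_status() {
    # Expected values, in trace order:
    # 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

port_status() {
    local port=$1 attr=$2 expected=$3 actual
    # Ask the bdevperf app over its RPC socket for every I/O path it sees, then
    # pull out one attribute of the path whose listener trsvcid matches $port.
    actual=$("$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    # The [[ x == \x ]] compares in the trace are this test; under set -e a
    # mismatch aborts the run.
    [[ $actual == "$expected" ]]
}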
00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.303 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.560 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.560 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.560 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.560 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.817 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.817 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:37.817 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.817 17:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.074 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.074 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.074 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.074 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.332 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.332 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:38.332 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.332 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:38.589 17:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:39.520 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:39.520 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:39.520 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.520 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.778 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.778 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:39.778 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.778 17:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.035 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.035 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.035 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.035 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.292 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:25:40.292 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:40.292 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:40.292 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:40.550 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:40.550 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:40.550 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:40.550 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:40.807 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:40.807 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2688756
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2688756 ']'
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2688756
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:40.808 17:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2688756
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2688756'
killing process with pid 2688756
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2688756
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2688756
00:25:41.069 {
00:25:41.069 "results": [
00:25:41.069 {
00:25:41.069 "job": "Nvme0n1",
00:25:41.069 "core_mask": "0x4",
00:25:41.069 "workload": "verify",
00:25:41.069 "status": "terminated",
00:25:41.069 "verify_range": {
00:25:41.069 "start": 0,
00:25:41.069 "length": 16384
00:25:41.069 },
00:25:41.069 "queue_depth": 128,
00:25:41.069 "io_size": 4096,
00:25:41.069 "runtime": 28.823304,
00:25:41.069 "iops": 10603.468637738408,
00:25:41.069 "mibps": 41.419799366165655,
00:25:41.069 "io_failed": 0,
00:25:41.069 "io_timeout": 0,
00:25:41.069 "avg_latency_us": 12051.540770092146,
00:25:41.069 "min_latency_us": 261.36380952380955,
00:25:41.069 "max_latency_us": 3019898.88
00:25:41.069 }
00:25:41.069 ],
00:25:41.069 "core_count": 1
00:25:41.069 }
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2688756
00:25:41.069 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:41.069 [2024-12-09 17:35:39.933139] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:25:41.069 [2024-12-09 17:35:39.933189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688756 ]
00:25:41.069 [2024-12-09 17:35:40.007813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:41.069 [2024-12-09 17:35:40.049327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:41.069 Running I/O for 90 seconds...
00:25:41.069 11410.00 IOPS, 44.57 MiB/s [2024-12-09T16:36:10.248Z]
11450.00 IOPS, 44.73 MiB/s [2024-12-09T16:36:10.248Z]
11449.67 IOPS, 44.73 MiB/s [2024-12-09T16:36:10.248Z]
11497.75 IOPS, 44.91 MiB/s [2024-12-09T16:36:10.248Z]
11472.80 IOPS, 44.82 MiB/s [2024-12-09T16:36:10.248Z]
11479.00 IOPS, 44.84 MiB/s [2024-12-09T16:36:10.248Z]
11485.71 IOPS, 44.87 MiB/s [2024-12-09T16:36:10.248Z]
11468.00 IOPS, 44.80 MiB/s [2024-12-09T16:36:10.248Z]
11456.00 IOPS, 44.75 MiB/s [2024-12-09T16:36:10.248Z]
11446.50 IOPS, 44.71 MiB/s [2024-12-09T16:36:10.248Z]
11458.36 IOPS, 44.76 MiB/s [2024-12-09T16:36:10.248Z]
11459.17 IOPS, 44.76 MiB/s [2024-12-09T16:36:10.248Z]
[2024-12-09 17:35:54.033988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.069 [2024-12-09 17:35:54.034023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:41.070 [2024-12-09 17:35:54.034074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.070 [2024-12-09 17:35:54.034082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:41.070 [2024-12-09 17:35:54.034095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.070 [2024-12-09 17:35:54.034103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.070 [2024-12-09 17:35:54.034115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.070 [2024-12-09 17:35:54.034122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:123 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 
cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.034979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.034986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.070 [2024-12-09 17:35:54.035120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-12-09 17:35:54.035127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035570] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-12-09 17:35:54.035579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.071 
[2024-12-09 17:35:54.035772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.035870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.035877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.036007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-12-09 17:35:54.036016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.036033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-12-09 17:35:54.036040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.071 [2024-12-09 17:35:54.036057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.072 [2024-12-09 17:35:54.036556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.072 [2024-12-09 17:35:54.036974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.072 [2024-12-09 17:35:54.036991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:35:54.036998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:35:54.037013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:41.073 [2024-12-09 17:35:54.037036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:41.073 [2024-12-09 17:35:54.037059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:41.073 [2024-12-09 17:35:54.037081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:41.073 [2024-12-09 17:35:54.037104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:41.073 [2024-12-09 17:35:54.037128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:41.073 [2024-12-09 17:35:54.037151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:35:54.037158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:41.073 11323.85 IOPS, 44.23 MiB/s [2024-12-09T16:36:10.252Z]
10515.00 IOPS, 41.07 MiB/s [2024-12-09T16:36:10.252Z]
9814.00 IOPS, 38.34 MiB/s [2024-12-09T16:36:10.252Z]
9302.50 IOPS, 36.34 MiB/s [2024-12-09T16:36:10.252Z]
9424.82 IOPS, 36.82 MiB/s [2024-12-09T16:36:10.252Z]
9530.06 IOPS, 37.23 MiB/s [2024-12-09T16:36:10.252Z]
9683.00 IOPS, 37.82 MiB/s [2024-12-09T16:36:10.252Z]
9876.20 IOPS, 38.58 MiB/s [2024-12-09T16:36:10.252Z]
10040.86 IOPS, 39.22 MiB/s [2024-12-09T16:36:10.252Z]
10108.73 IOPS, 39.49 MiB/s [2024-12-09T16:36:10.252Z]
10159.30 IOPS, 39.68 MiB/s [2024-12-09T16:36:10.252Z]
10219.00 IOPS, 39.92 MiB/s [2024-12-09T16:36:10.252Z]
10350.64 IOPS, 40.43 MiB/s [2024-12-09T16:36:10.252Z]
10464.65 IOPS, 40.88 MiB/s [2024-12-09T16:36:10.252Z]
[2024-12-09 17:36:07.656617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.073 [2024-12-09 17:36:07.656657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656890] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.656979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.656987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 
17:36:07.657087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.073 [2024-12-09 17:36:07.657226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.073 [2024-12-09 17:36:07.657238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.657245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.657264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128560 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.657283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.657302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.657323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.657361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.657373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.657380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:25:41.074 [2024-12-09 17:36:07.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.658695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.658714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.658734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.658753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.658765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.074 [2024-12-09 17:36:07.658772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.659165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.659179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.659194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.659201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.659214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.074 [2024-12-09 17:36:07.659227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.074 [2024-12-09 17:36:07.659243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.075 [2024-12-09 17:36:07.659250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659366] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.075 [2024-12-09 17:36:07.659540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.075 [2024-12-09 17:36:07.659553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128568 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.075 [2024-12-09 17:36:07.659559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:41.075 10541.15 IOPS, 41.18 MiB/s [2024-12-09T16:36:10.254Z]
10578.89 IOPS, 41.32 MiB/s [2024-12-09T16:36:10.254Z]
Received shutdown signal, test time was about 28.824131 seconds
00:25:41.075
00:25:41.075 Latency(us)
00:25:41.075 [2024-12-09T16:36:10.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.075 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:41.075 Verification LBA range: start 0x0 length 0x4000
00:25:41.075 Nvme0n1 : 28.82 10603.47 41.42 0.00 0.00 12051.54 261.36 3019898.88
00:25:41.075 [2024-12-09T16:36:10.254Z] ===================================================================================================================
00:25:41.075 [2024-12-09T16:36:10.254Z] Total : 10603.47 41.42 0.00 0.00 12051.54 261.36 3019898.88
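A quick consistency check on the summary above: at the 4096-byte I/O size shown in the job line, MiB/s is IOPS x 4096 / 2^20, i.e. IOPS / 256. A stand-alone sketch, illustrative only and not part of the test harness; the figures are copied from the table:

    # 10603.47 IOPS at 4 KiB per I/O should give the reported 41.42 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 10603.47 * 4096 / 1048576 }'

The same relation holds for the interval samples, e.g. 10541.15 / 256 = 41.18 MiB/s.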
00:25:41.075 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:41.334 rmmod nvme_tcp
00:25:41.334 rmmod nvme_fabrics
00:25:41.334 rmmod nvme_keyring
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2688492 ']'
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2688492
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2688492 ']'
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2688492
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2688492
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2688492'
00:25:41.334 killing process with pid 2688492
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2688492
00:25:41.334 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2688492
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:41.593 17:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:44.128
00:25:44.128 real 0m40.468s
00:25:44.128 user 1m49.891s
00:25:44.128 sys 0m11.466s
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:44.128 ************************************
00:25:44.128 END TEST nvmf_host_multipath_status ************************************
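Condensed, the teardown traced above reduces to the sequence below. This is a hand-written sketch using the values from this run (the rpc.py path and pid 2688492), not a substitute for the suite's nvmftestfini/killprocess helpers:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the subsystem before tearing the target down
    sync                                                       # flush before unloading kernel modules
    modprobe -v -r nvme-tcp                                    # drops nvme_tcp and its nvme_fabrics/nvme_keyring deps
    pid=2688492                                                # nvmfpid recorded when the target was started
    if kill -0 "$pid" 2>/dev/null; then                        # is the target still running?
        kill "$pid" && wait "$pid"                             # terminate and reap (wait works: it is our child)
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore       # strip only the SPDK-tagged firewall rules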
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:44.128 ************************************
00:25:44.128 START TEST nvmf_discovery_remove_ifc ************************************
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:44.128 * Looking for test storage...
00:25:44.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:44.128 17:36:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.128 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:44.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.128 --rc genhtml_branch_coverage=1 00:25:44.128 --rc genhtml_function_coverage=1 00:25:44.128 --rc genhtml_legend=1 00:25:44.128 --rc geninfo_all_blocks=1 00:25:44.128 --rc geninfo_unexecuted_blocks=1 00:25:44.128 00:25:44.128 ' 00:25:44.128 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:44.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.128 --rc genhtml_branch_coverage=1 00:25:44.128 --rc genhtml_function_coverage=1 00:25:44.128 --rc genhtml_legend=1 00:25:44.128 --rc geninfo_all_blocks=1 00:25:44.128 --rc geninfo_unexecuted_blocks=1 00:25:44.128 00:25:44.128 ' 00:25:44.128 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:44.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.128 --rc genhtml_branch_coverage=1 00:25:44.129 --rc genhtml_function_coverage=1 00:25:44.129 --rc genhtml_legend=1 00:25:44.129 --rc geninfo_all_blocks=1 00:25:44.129 --rc geninfo_unexecuted_blocks=1 00:25:44.129 00:25:44.129 ' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:44.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.129 --rc genhtml_branch_coverage=1 00:25:44.129 --rc genhtml_function_coverage=1 00:25:44.129 --rc genhtml_legend=1 00:25:44.129 --rc geninfo_all_blocks=1 00:25:44.129 --rc geninfo_unexecuted_blocks=1 00:25:44.129 00:25:44.129 ' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.129 
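The cmp_versions walk traced above is a generic dotted-version comparison: split both versions on dots/dashes, pad the shorter array with zeros, and compare component by component. A minimal stand-alone rendition of the same idea (the function name version_lt is mine, and it assumes purely numeric components, which the real scripts/common.sh additionally validates):

    # Return 0 if version $1 < version $2 (sketch of scripts/common.sh's lt/cmp_versions).
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                              # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"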
17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.129 17:36:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:50.703 17:36:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:50.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.703 17:36:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:50.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:50.703 Found net devices under 0000:af:00.0: cvl_0_0 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.703 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:50.703 Found net devices under 0000:af:00.1: cvl_0_1 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}")
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:50.704
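The nvmf_tcp_init sequence above gives the test a two-sided topology on one box: the first e810 port (cvl_0_0) becomes the target NIC inside a fresh network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits the NVMe/TCP port. The same plumbing, condensed (interface names and addresses are the ones from this run; run as root):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let 4420 through
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator sanity check

Moving a physical port into a namespace (rather than using veth pairs) is what lets a single host exercise real NIC behavior on both ends of the connection.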
17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:25:50.704 00:25:50.704 --- 10.0.0.2 ping statistics --- 00:25:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.704 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:50.704 00:25:50.704 --- 10.0.0.1 ping statistics --- 00:25:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.704 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2697923 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2697923 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2697923 ']' 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
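nvmfappstart above launches the target inside the namespace and waitforlisten (whose output follows) blocks until the RPC socket answers. A minimal stand-in for that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and polling rpc_get_methods; the real waitforlisten in autotest_common.sh is more elaborate:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                            # pid to kill at teardown
    for i in {1..100}; do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1                                         # RPC server not up yet; retry
    done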
00:25:50.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.704 17:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.704 [2024-12-09 17:36:19.030372] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:25:50.704 [2024-12-09 17:36:19.030417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.704 [2024-12-09 17:36:19.109007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.704 [2024-12-09 17:36:19.148798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.704 [2024-12-09 17:36:19.148834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.704 [2024-12-09 17:36:19.148842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.704 [2024-12-09 17:36:19.148848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.704 [2024-12-09 17:36:19.148853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.704 [2024-12-09 17:36:19.149394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.704 [2024-12-09 17:36:19.301398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.704 [2024-12-09 17:36:19.309571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:50.704 null0 00:25:50.704 [2024-12-09 17:36:19.341550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2697952 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2697952 /tmp/host.sock 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2697952 ']' 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:50.704 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:50.705 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.705 [2024-12-09 17:36:19.410763] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:25:50.705 [2024-12-09 17:36:19.410805] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697952 ] 00:25:50.705 [2024-12-09 17:36:19.467168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.705 [2024-12-09 17:36:19.507918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.705 17:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:51.640 [2024-12-09 17:36:20.698360] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:51.640 [2024-12-09 17:36:20.698382] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:51.640 [2024-12-09 17:36:20.698394] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:51.640 [2024-12-09 17:36:20.784646] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:51.898 [2024-12-09 17:36:20.880365] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:25:51.898 [2024-12-09 17:36:20.881114] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1250210:1 started.
00:25:51.898 [2024-12-09 17:36:20.882355] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:25:51.898 [2024-12-09 17:36:20.882396] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:25:51.898 [2024-12-09 17:36:20.882415] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:25:51.898 [2024-12-09 17:36:20.882427] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:51.898 [2024-12-09 17:36:20.882445] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:51.898 [2024-12-09 17:36:20.886851] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1250210 was disconnected and freed. delete nvme_qpair.
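Once the discovery service attaches nvme0 and the nvme0n1 bdev appears, wait_for_bdev/get_bdev_list above is nothing more than polling bdev_get_bdevs until the sorted name list matches. A condensed rendition of those two helpers as traced (same RPC socket and pipeline):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_bdev_list() {
        # Sorted, space-joined bdev names from the host app.
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Block until the bdev list equals the expected string ('' means empty).
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1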
00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:51.898 17:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:52.157 17:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.092 17:36:22 
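The @75/@76 commands above are the point of the whole test: yank the address and link out from under the live connection, then keep polling until the bdev disappears. In isolation (using the wait_for_bdev sketch given earlier):

    # Simulate losing the listening interface under an active NVMe/TCP connection.
    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 down
    wait_for_bdev ''    # bdev list drains once the ctrlr is finally dropped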
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.092 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:53.093 17:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.468 17:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.403 17:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.337 17:36:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.337 17:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.271 [2024-12-09 17:36:26.323985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:57.271 [2024-12-09 17:36:26.324026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.271 [2024-12-09 17:36:26.324053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.271 [2024-12-09 17:36:26.324064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.271 [2024-12-09 17:36:26.324071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.271 [2024-12-09 17:36:26.324089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.271 [2024-12-09 17:36:26.324095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.271 [2024-12-09 17:36:26.324103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.271 [2024-12-09 17:36:26.324109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.271 [2024-12-09 17:36:26.324116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.271 [2024-12-09 17:36:26.324122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.271 [2024-12-09 17:36:26.324135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122ca10 is same with the state(6) to be set 00:25:57.271 [2024-12-09 17:36:26.334007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122ca10 (9): Bad file descriptor 00:25:57.271 [2024-12-09 17:36:26.344044] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.271 [2024-12-09 17:36:26.344054] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.271 [2024-12-09 17:36:26.344060] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.271 [2024-12-09 17:36:26.344064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.271 [2024-12-09 17:36:26.344085] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
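The errno-110 cascade and the reset/reconnect attempts above are governed by the timeouts passed to bdev_nvme_start_discovery earlier (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1): retry every second, fail I/O after one second, give the controller up after two. For illustration only, the same policy on a directly attached controller would look roughly like this (bdev_nvme_attach_controller is not the RPC this test uses, so treat it as a hedged example):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1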
00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.271 17:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.206 [2024-12-09 17:36:27.354265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:58.206 [2024-12-09 17:36:27.354344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122ca10 with addr=10.0.0.2, port=4420 00:25:58.206 [2024-12-09 17:36:27.354378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122ca10 is same with the state(6) to be set 00:25:58.206 [2024-12-09 17:36:27.354432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122ca10 (9): Bad file descriptor 00:25:58.206 [2024-12-09 17:36:27.355392] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:58.206 [2024-12-09 17:36:27.355455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.206 [2024-12-09 17:36:27.355478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.206 [2024-12-09 17:36:27.355502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.206 [2024-12-09 17:36:27.355522] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.206 [2024-12-09 17:36:27.355537] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.206 [2024-12-09 17:36:27.355551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.206 [2024-12-09 17:36:27.355573] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.206 [2024-12-09 17:36:27.355588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.206 17:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.206 17:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.206 17:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.580 [2024-12-09 17:36:28.358104] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.580 [2024-12-09 17:36:28.358125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:59.580 [2024-12-09 17:36:28.358135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.580 [2024-12-09 17:36:28.358142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.580 [2024-12-09 17:36:28.358149] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:59.580 [2024-12-09 17:36:28.358156] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.580 [2024-12-09 17:36:28.358176] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.580 [2024-12-09 17:36:28.358180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.580 [2024-12-09 17:36:28.358202] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:59.580 [2024-12-09 17:36:28.358227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.580 [2024-12-09 17:36:28.358237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.580 [2024-12-09 17:36:28.358247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.580 [2024-12-09 17:36:28.358255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.580 [2024-12-09 17:36:28.358267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.580 [2024-12-09 17:36:28.358273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.580 [2024-12-09 17:36:28.358280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.580 [2024-12-09 17:36:28.358287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.580 [2024-12-09 17:36:28.358294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.580 [2024-12-09 17:36:28.358301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.580 [2024-12-09 17:36:28.358307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
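With the discovery entry removed and the controller fully failed, the test restores the interface so discovery can re-attach the subsystem as nvme1n1. The two commands traced just below (host/discovery_remove_ifc.sh@82-83), shown in isolation with the interface and namespace names used by this run:

    # Re-add the target address inside the target netns, then bring the port up;
    # the discovery service should then re-create the bdev as nvme1n1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up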
00:25:59.580 [2024-12-09 17:36:28.358591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121bd20 (9): Bad file descriptor 00:25:59.580 [2024-12-09 17:36:28.359602] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:59.580 [2024-12-09 17:36:28.359613] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:59.580 17:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.514 17:36:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:00.514 17:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.449 [2024-12-09 17:36:30.414350] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.449 [2024-12-09 17:36:30.414368] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.449 [2024-12-09 17:36:30.414380] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.449 [2024-12-09 17:36:30.502631] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.449 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.707 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:01.707 17:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.707 [2024-12-09 17:36:30.683592] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:01.707 [2024-12-09 17:36:30.684115] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1259980:1 started. 
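Once nvme1n1 is back, the test clears its traps and tears down both SPDK processes. A simplified reconstruction of the killprocess helper traced below (a sketch; the real helper has additional branches, such as sudo-wrapped processes, that this run does not exercise):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 1           # pid must still be alive
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # This run sees process_name=reactor_0/reactor_1, so the special sudo
        # handling checked by '[' reactor_0 = sudo ']' is skipped here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }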
00:26:01.707 [2024-12-09 17:36:30.685134] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:01.707 [2024-12-09 17:36:30.685165] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:01.707 [2024-12-09 17:36:30.685181] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:01.707 [2024-12-09 17:36:30.685193] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:01.707 [2024-12-09 17:36:30.685199] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:01.707 [2024-12-09 17:36:30.691712] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1259980 was disconnected and freed. delete nvme_qpair. 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2697952 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2697952 ']' 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2697952 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697952 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697952' 00:26:02.642 killing process with pid 2697952 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2697952 00:26:02.642 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2697952 00:26:02.901 17:36:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.901 rmmod nvme_tcp 00:26:02.901 rmmod nvme_fabrics 00:26:02.901 rmmod nvme_keyring 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2697923 ']' 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2697923 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2697923 ']' 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2697923 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.901 17:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697923 00:26:02.901 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.901 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.901 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697923' 00:26:02.901 killing process with pid 2697923 00:26:02.901 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2697923 00:26:02.901 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2697923 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.161 17:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.695 00:26:05.695 real 0m21.424s 00:26:05.695 user 0m26.578s 00:26:05.695 sys 0m5.858s 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.695 ************************************ 00:26:05.695 END TEST nvmf_discovery_remove_ifc 00:26:05.695 ************************************ 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.695 ************************************ 00:26:05.695 START TEST nvmf_identify_kernel_target 00:26:05.695 ************************************ 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:05.695 * Looking for test storage... 
00:26:05.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:05.695 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.696 --rc genhtml_branch_coverage=1 00:26:05.696 --rc genhtml_function_coverage=1 00:26:05.696 --rc genhtml_legend=1 00:26:05.696 --rc geninfo_all_blocks=1 00:26:05.696 --rc geninfo_unexecuted_blocks=1 00:26:05.696 00:26:05.696 ' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.696 --rc genhtml_branch_coverage=1 00:26:05.696 --rc genhtml_function_coverage=1 00:26:05.696 --rc genhtml_legend=1 00:26:05.696 --rc geninfo_all_blocks=1 00:26:05.696 --rc geninfo_unexecuted_blocks=1 00:26:05.696 00:26:05.696 ' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.696 --rc genhtml_branch_coverage=1 00:26:05.696 --rc genhtml_function_coverage=1 00:26:05.696 --rc genhtml_legend=1 00:26:05.696 --rc geninfo_all_blocks=1 00:26:05.696 --rc geninfo_unexecuted_blocks=1 00:26:05.696 00:26:05.696 ' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.696 --rc genhtml_branch_coverage=1 00:26:05.696 --rc genhtml_function_coverage=1 00:26:05.696 --rc genhtml_legend=1 00:26:05.696 --rc geninfo_all_blocks=1 00:26:05.696 --rc geninfo_unexecuted_blocks=1 00:26:05.696 00:26:05.696 ' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:05.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.696 17:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.967 17:36:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.967 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:10.968 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:10.968 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:10.968 Found net devices under 0000:af:00.0: cvl_0_0 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:10.968 Found net devices under 0000:af:00.1: cvl_0_1 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.968 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:11.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:11.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms
00:26:11.227 
00:26:11.227 --- 10.0.0.2 ping statistics ---
00:26:11.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:11.227 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:11.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:11.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms
00:26:11.227 
00:26:11.227 --- 10.0.0.1 ping statistics ---
00:26:11.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:11.227 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:26:11.227 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:11.486 17:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:14.020 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:26:14.279 Waiting for block devices as requested
00:26:14.537 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:26:14.537 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:26:14.537 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:26:14.537 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:26:14.537 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:26:14.797 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:26:14.797 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:26:14.797 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:26:15.056 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:26:15.056 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:26:15.056 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:26:15.314 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:26:15.314 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:26:15.314 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:26:15.314 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:26:15.573 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:26:15.573 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:26:15.573 17:36:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:15.573 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:15.832 No valid GPT data, bailing 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:26:15.832 No valid GPT data, bailing 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:26:15.832 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:26:15.832 17:36:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:26:15.833 
00:26:15.833 Discovery Log Number of Records 2, Generation counter 2
00:26:15.833 =====Discovery Log Entry 0======
00:26:15.833 trtype: tcp
00:26:15.833 adrfam: ipv4
00:26:15.833 subtype: current discovery subsystem
00:26:15.833 treq: not specified, sq flow control disable supported
00:26:15.833 portid: 1
00:26:15.833 trsvcid: 4420
00:26:15.833 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:15.833 traddr: 10.0.0.1
00:26:15.833 eflags: none
00:26:15.833 sectype: none
00:26:15.833 =====Discovery Log Entry 1======
00:26:15.833 trtype: tcp
00:26:15.833 adrfam: ipv4
00:26:15.833 subtype: nvme subsystem
00:26:15.833 treq: not specified, sq flow control disable supported
00:26:15.833 portid: 1
00:26:15.833 trsvcid: 4420
00:26:15.833 subnqn: nqn.2016-06.io.spdk:testnqn
00:26:15.833 traddr: 10.0.0.1
00:26:15.833 eflags: none
00:26:15.833 sectype: none
00:26:15.833 17:36:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:26:15.833 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:26:16.096 =====================================================
00:26:16.096 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:16.096 =====================================================
00:26:16.096 Controller Capabilities/Features
00:26:16.096 ================================
00:26:16.096 Vendor ID: 0000
00:26:16.096 Subsystem Vendor ID: 0000
00:26:16.097 Serial Number: 4845a7c1d53cbca31701
00:26:16.097 Model Number: Linux
00:26:16.097 Firmware Version: 6.8.9-20
00:26:16.097 Recommended Arb Burst: 0
00:26:16.097 IEEE OUI Identifier: 00 00 00
00:26:16.097 Multi-path I/O
00:26:16.097 May have multiple subsystem ports: No
00:26:16.097 May have multiple controllers: No
00:26:16.097 Associated with SR-IOV VF: No
00:26:16.097 Max Data Transfer Size: Unlimited
00:26:16.097 Max Number of Namespaces: 0
00:26:16.097 Max Number of I/O Queues: 1024
00:26:16.097 NVMe Specification Version (VS): 1.3
00:26:16.097 NVMe Specification Version (Identify): 1.3
00:26:16.097 Maximum Queue Entries: 1024
00:26:16.097 Contiguous Queues Required: No
00:26:16.097 Arbitration Mechanisms Supported
00:26:16.097 Weighted Round Robin: Not Supported
00:26:16.097 Vendor Specific: Not Supported
00:26:16.097 Reset Timeout: 7500 ms
00:26:16.097 Doorbell Stride: 4 bytes
00:26:16.097 NVM Subsystem Reset: Not Supported
00:26:16.097 Command Sets Supported
00:26:16.097 NVM Command Set: Supported
00:26:16.097 Boot Partition: Not Supported
00:26:16.097 Memory Page Size Minimum: 4096 bytes
00:26:16.097 Memory Page Size Maximum: 4096 bytes
00:26:16.097 Persistent Memory Region: Not Supported
00:26:16.097 Optional Asynchronous Events Supported
00:26:16.097 Namespace Attribute Notices: Not Supported
00:26:16.097 Firmware Activation Notices: Not Supported
00:26:16.097 ANA Change Notices: Not Supported
00:26:16.097 PLE Aggregate Log Change Notices: Not Supported
00:26:16.097 LBA Status Info Alert Notices: Not Supported
00:26:16.097 EGE Aggregate Log Change Notices: Not Supported
00:26:16.097 Normal NVM Subsystem Shutdown event: Not Supported
00:26:16.097 Zone Descriptor Change Notices: Not Supported
00:26:16.097 Discovery Log Change Notices: Supported
00:26:16.097 Controller Attributes
00:26:16.097 128-bit Host Identifier: Not Supported
00:26:16.097 Non-Operational Permissive Mode: Not Supported
00:26:16.097 NVM Sets: Not Supported
00:26:16.097 Read Recovery Levels: Not Supported
00:26:16.097 Endurance Groups: Not Supported
00:26:16.097 Predictable Latency Mode: Not Supported
00:26:16.097 Traffic Based Keep ALive: Not Supported
00:26:16.097 Namespace Granularity: Not Supported
00:26:16.097 SQ Associations: Not Supported
00:26:16.097 UUID List: Not Supported
00:26:16.097 Multi-Domain Subsystem: Not Supported
00:26:16.097 Fixed Capacity Management: Not Supported
00:26:16.097 Variable Capacity Management: Not Supported
00:26:16.097 Delete Endurance Group: Not Supported
00:26:16.097 Delete NVM Set: Not Supported
00:26:16.097 Extended LBA Formats Supported: Not Supported
00:26:16.097 Flexible Data Placement Supported: Not Supported
00:26:16.097 
00:26:16.097 Controller Memory Buffer Support
00:26:16.097 ================================
00:26:16.097 Supported: No
00:26:16.097 
00:26:16.097 Persistent Memory Region Support
00:26:16.097 ================================
00:26:16.097 Supported: No
00:26:16.097 
00:26:16.097 Admin Command Set Attributes
00:26:16.097 ============================
00:26:16.097 Security Send/Receive: Not Supported
00:26:16.097 Format NVM: Not Supported
00:26:16.097 Firmware Activate/Download: Not Supported
00:26:16.097 Namespace Management: Not Supported
00:26:16.097 Device Self-Test: Not Supported
00:26:16.097 Directives: Not Supported
00:26:16.097 NVMe-MI: Not Supported
00:26:16.097 Virtualization Management: Not Supported
00:26:16.097 Doorbell Buffer Config: Not Supported
00:26:16.097 Get LBA Status Capability: 
Not Supported 00:26:16.097 Command & Feature Lockdown Capability: Not Supported 00:26:16.097 Abort Command Limit: 1 00:26:16.097 Async Event Request Limit: 1 00:26:16.097 Number of Firmware Slots: N/A 00:26:16.097 Firmware Slot 1 Read-Only: N/A 00:26:16.097 Firmware Activation Without Reset: N/A 00:26:16.097 Multiple Update Detection Support: N/A 00:26:16.097 Firmware Update Granularity: No Information Provided 00:26:16.097 Per-Namespace SMART Log: No 00:26:16.097 Asymmetric Namespace Access Log Page: Not Supported 00:26:16.097 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:16.097 Command Effects Log Page: Not Supported 00:26:16.097 Get Log Page Extended Data: Supported 00:26:16.097 Telemetry Log Pages: Not Supported 00:26:16.097 Persistent Event Log Pages: Not Supported 00:26:16.097 Supported Log Pages Log Page: May Support 00:26:16.097 Commands Supported & Effects Log Page: Not Supported 00:26:16.097 Feature Identifiers & Effects Log Page:May Support 00:26:16.097 NVMe-MI Commands & Effects Log Page: May Support 00:26:16.097 Data Area 4 for Telemetry Log: Not Supported 00:26:16.097 Error Log Page Entries Supported: 1 00:26:16.097 Keep Alive: Not Supported 00:26:16.097 00:26:16.097 NVM Command Set Attributes 00:26:16.097 ========================== 00:26:16.097 Submission Queue Entry Size 00:26:16.097 Max: 1 00:26:16.097 Min: 1 00:26:16.097 Completion Queue Entry Size 00:26:16.097 Max: 1 00:26:16.097 Min: 1 00:26:16.097 Number of Namespaces: 0 00:26:16.097 Compare Command: Not Supported 00:26:16.098 Write Uncorrectable Command: Not Supported 00:26:16.098 Dataset Management Command: Not Supported 00:26:16.098 Write Zeroes Command: Not Supported 00:26:16.098 Set Features Save Field: Not Supported 00:26:16.098 Reservations: Not Supported 00:26:16.098 Timestamp: Not Supported 00:26:16.098 Copy: Not Supported 00:26:16.098 Volatile Write Cache: Not Present 00:26:16.098 Atomic Write Unit (Normal): 1 00:26:16.098 Atomic Write Unit (PFail): 1 00:26:16.098 Atomic Compare & Write Unit: 1 00:26:16.098 Fused Compare & Write: Not Supported 00:26:16.098 Scatter-Gather List 00:26:16.098 SGL Command Set: Supported 00:26:16.098 SGL Keyed: Not Supported 00:26:16.098 SGL Bit Bucket Descriptor: Not Supported 00:26:16.098 SGL Metadata Pointer: Not Supported 00:26:16.098 Oversized SGL: Not Supported 00:26:16.098 SGL Metadata Address: Not Supported 00:26:16.098 SGL Offset: Supported 00:26:16.098 Transport SGL Data Block: Not Supported 00:26:16.098 Replay Protected Memory Block: Not Supported 00:26:16.098 00:26:16.098 Firmware Slot Information 00:26:16.098 ========================= 00:26:16.098 Active slot: 0 00:26:16.098 00:26:16.098 00:26:16.098 Error Log 00:26:16.098 ========= 00:26:16.098 00:26:16.098 Active Namespaces 00:26:16.098 ================= 00:26:16.098 Discovery Log Page 00:26:16.098 ================== 00:26:16.098 Generation Counter: 2 00:26:16.098 Number of Records: 2 00:26:16.098 Record Format: 0 00:26:16.098 00:26:16.098 Discovery Log Entry 0 00:26:16.098 ---------------------- 00:26:16.098 Transport Type: 3 (TCP) 00:26:16.098 Address Family: 1 (IPv4) 00:26:16.098 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:16.098 Entry Flags: 00:26:16.098 Duplicate Returned Information: 0 00:26:16.098 Explicit Persistent Connection Support for Discovery: 0 00:26:16.098 Transport Requirements: 00:26:16.098 Secure Channel: Not Specified 00:26:16.098 Port ID: 1 (0x0001) 00:26:16.098 Controller ID: 65535 (0xffff) 00:26:16.098 Admin Max SQ Size: 32 00:26:16.098 Transport Service Identifier: 4420 
00:26:16.098 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:16.098 Transport Address: 10.0.0.1 00:26:16.098 Discovery Log Entry 1 00:26:16.098 ---------------------- 00:26:16.098 Transport Type: 3 (TCP) 00:26:16.098 Address Family: 1 (IPv4) 00:26:16.098 Subsystem Type: 2 (NVM Subsystem) 00:26:16.098 Entry Flags: 00:26:16.098 Duplicate Returned Information: 0 00:26:16.098 Explicit Persistent Connection Support for Discovery: 0 00:26:16.098 Transport Requirements: 00:26:16.098 Secure Channel: Not Specified 00:26:16.098 Port ID: 1 (0x0001) 00:26:16.098 Controller ID: 65535 (0xffff) 00:26:16.098 Admin Max SQ Size: 32 00:26:16.098 Transport Service Identifier: 4420 00:26:16.098 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:16.098 Transport Address: 10.0.0.1 00:26:16.098 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:16.098 get_feature(0x01) failed 00:26:16.098 get_feature(0x02) failed 00:26:16.098 get_feature(0x04) failed 00:26:16.098 ===================================================== 00:26:16.098 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:16.098 ===================================================== 00:26:16.098 Controller Capabilities/Features 00:26:16.098 ================================ 00:26:16.098 Vendor ID: 0000 00:26:16.098 Subsystem Vendor ID: 0000 00:26:16.098 Serial Number: 16be20fe4e93d0104e77 00:26:16.098 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:16.098 Firmware Version: 6.8.9-20 00:26:16.098 Recommended Arb Burst: 6 00:26:16.098 IEEE OUI Identifier: 00 00 00 00:26:16.098 Multi-path I/O 00:26:16.098 May have multiple subsystem ports: Yes 00:26:16.098 May have multiple controllers: Yes 00:26:16.098 Associated with SR-IOV VF: No 00:26:16.098 Max Data Transfer Size: Unlimited 00:26:16.098 Max Number of Namespaces: 1024 00:26:16.098 Max Number of I/O Queues: 128 00:26:16.098 NVMe Specification Version (VS): 1.3 00:26:16.098 NVMe Specification Version (Identify): 1.3 00:26:16.098 Maximum Queue Entries: 1024 00:26:16.098 Contiguous Queues Required: No 00:26:16.098 Arbitration Mechanisms Supported 00:26:16.098 Weighted Round Robin: Not Supported 00:26:16.098 Vendor Specific: Not Supported 00:26:16.098 Reset Timeout: 7500 ms 00:26:16.098 Doorbell Stride: 4 bytes 00:26:16.098 NVM Subsystem Reset: Not Supported 00:26:16.098 Command Sets Supported 00:26:16.098 NVM Command Set: Supported 00:26:16.098 Boot Partition: Not Supported 00:26:16.098 Memory Page Size Minimum: 4096 bytes 00:26:16.098 Memory Page Size Maximum: 4096 bytes 00:26:16.098 Persistent Memory Region: Not Supported 00:26:16.098 Optional Asynchronous Events Supported 00:26:16.098 Namespace Attribute Notices: Supported 00:26:16.098 Firmware Activation Notices: Not Supported 00:26:16.098 ANA Change Notices: Supported 00:26:16.098 PLE Aggregate Log Change Notices: Not Supported 00:26:16.098 LBA Status Info Alert Notices: Not Supported 00:26:16.098 EGE Aggregate Log Change Notices: Not Supported 00:26:16.098 Normal NVM Subsystem Shutdown event: Not Supported 00:26:16.098 Zone Descriptor Change Notices: Not Supported 00:26:16.098 Discovery Log Change Notices: Not Supported 00:26:16.098 Controller Attributes 00:26:16.098 128-bit Host Identifier: Supported 00:26:16.098 Non-Operational Permissive Mode: Not Supported 
00:26:16.098 NVM Sets: Not Supported 00:26:16.098 Read Recovery Levels: Not Supported 00:26:16.098 Endurance Groups: Not Supported 00:26:16.098 Predictable Latency Mode: Not Supported 00:26:16.098 Traffic Based Keep ALive: Supported 00:26:16.098 Namespace Granularity: Not Supported 00:26:16.098 SQ Associations: Not Supported 00:26:16.098 UUID List: Not Supported 00:26:16.098 Multi-Domain Subsystem: Not Supported 00:26:16.098 Fixed Capacity Management: Not Supported 00:26:16.098 Variable Capacity Management: Not Supported 00:26:16.098 Delete Endurance Group: Not Supported 00:26:16.098 Delete NVM Set: Not Supported 00:26:16.098 Extended LBA Formats Supported: Not Supported 00:26:16.098 Flexible Data Placement Supported: Not Supported 00:26:16.098 00:26:16.098 Controller Memory Buffer Support 00:26:16.098 ================================ 00:26:16.098 Supported: No 00:26:16.098 00:26:16.099 Persistent Memory Region Support 00:26:16.099 ================================ 00:26:16.099 Supported: No 00:26:16.099 00:26:16.099 Admin Command Set Attributes 00:26:16.099 ============================ 00:26:16.099 Security Send/Receive: Not Supported 00:26:16.099 Format NVM: Not Supported 00:26:16.099 Firmware Activate/Download: Not Supported 00:26:16.099 Namespace Management: Not Supported 00:26:16.099 Device Self-Test: Not Supported 00:26:16.099 Directives: Not Supported 00:26:16.099 NVMe-MI: Not Supported 00:26:16.099 Virtualization Management: Not Supported 00:26:16.099 Doorbell Buffer Config: Not Supported 00:26:16.099 Get LBA Status Capability: Not Supported 00:26:16.099 Command & Feature Lockdown Capability: Not Supported 00:26:16.099 Abort Command Limit: 4 00:26:16.099 Async Event Request Limit: 4 00:26:16.099 Number of Firmware Slots: N/A 00:26:16.099 Firmware Slot 1 Read-Only: N/A 00:26:16.099 Firmware Activation Without Reset: N/A 00:26:16.099 Multiple Update Detection Support: N/A 00:26:16.099 Firmware Update Granularity: No Information Provided 00:26:16.099 Per-Namespace SMART Log: Yes 00:26:16.099 Asymmetric Namespace Access Log Page: Supported 00:26:16.099 ANA Transition Time : 10 sec 00:26:16.099 00:26:16.099 Asymmetric Namespace Access Capabilities 00:26:16.099 ANA Optimized State : Supported 00:26:16.099 ANA Non-Optimized State : Supported 00:26:16.099 ANA Inaccessible State : Supported 00:26:16.099 ANA Persistent Loss State : Supported 00:26:16.099 ANA Change State : Supported 00:26:16.099 ANAGRPID is not changed : No 00:26:16.099 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:16.099 00:26:16.099 ANA Group Identifier Maximum : 128 00:26:16.099 Number of ANA Group Identifiers : 128 00:26:16.099 Max Number of Allowed Namespaces : 1024 00:26:16.099 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:16.099 Command Effects Log Page: Supported 00:26:16.099 Get Log Page Extended Data: Supported 00:26:16.099 Telemetry Log Pages: Not Supported 00:26:16.099 Persistent Event Log Pages: Not Supported 00:26:16.099 Supported Log Pages Log Page: May Support 00:26:16.099 Commands Supported & Effects Log Page: Not Supported 00:26:16.099 Feature Identifiers & Effects Log Page:May Support 00:26:16.099 NVMe-MI Commands & Effects Log Page: May Support 00:26:16.099 Data Area 4 for Telemetry Log: Not Supported 00:26:16.099 Error Log Page Entries Supported: 128 00:26:16.099 Keep Alive: Supported 00:26:16.099 Keep Alive Granularity: 1000 ms 00:26:16.099 00:26:16.099 NVM Command Set Attributes 00:26:16.099 ========================== 00:26:16.099 Submission Queue Entry Size 00:26:16.099 Max: 64 
00:26:16.099 Min: 64 00:26:16.099 Completion Queue Entry Size 00:26:16.099 Max: 16 00:26:16.099 Min: 16 00:26:16.099 Number of Namespaces: 1024 00:26:16.099 Compare Command: Not Supported 00:26:16.099 Write Uncorrectable Command: Not Supported 00:26:16.099 Dataset Management Command: Supported 00:26:16.099 Write Zeroes Command: Supported 00:26:16.099 Set Features Save Field: Not Supported 00:26:16.099 Reservations: Not Supported 00:26:16.099 Timestamp: Not Supported 00:26:16.099 Copy: Not Supported 00:26:16.099 Volatile Write Cache: Present 00:26:16.099 Atomic Write Unit (Normal): 1 00:26:16.099 Atomic Write Unit (PFail): 1 00:26:16.099 Atomic Compare & Write Unit: 1 00:26:16.099 Fused Compare & Write: Not Supported 00:26:16.099 Scatter-Gather List 00:26:16.099 SGL Command Set: Supported 00:26:16.099 SGL Keyed: Not Supported 00:26:16.099 SGL Bit Bucket Descriptor: Not Supported 00:26:16.099 SGL Metadata Pointer: Not Supported 00:26:16.099 Oversized SGL: Not Supported 00:26:16.099 SGL Metadata Address: Not Supported 00:26:16.099 SGL Offset: Supported 00:26:16.099 Transport SGL Data Block: Not Supported 00:26:16.099 Replay Protected Memory Block: Not Supported 00:26:16.099 00:26:16.099 Firmware Slot Information 00:26:16.099 ========================= 00:26:16.099 Active slot: 0 00:26:16.099 00:26:16.099 Asymmetric Namespace Access 00:26:16.099 =========================== 00:26:16.099 Change Count : 0 00:26:16.099 Number of ANA Group Descriptors : 1 00:26:16.099 ANA Group Descriptor : 0 00:26:16.099 ANA Group ID : 1 00:26:16.099 Number of NSID Values : 1 00:26:16.099 Change Count : 0 00:26:16.099 ANA State : 1 00:26:16.099 Namespace Identifier : 1 00:26:16.099 00:26:16.099 Commands Supported and Effects 00:26:16.099 ============================== 00:26:16.099 Admin Commands 00:26:16.099 -------------- 00:26:16.099 Get Log Page (02h): Supported 00:26:16.099 Identify (06h): Supported 00:26:16.099 Abort (08h): Supported 00:26:16.099 Set Features (09h): Supported 00:26:16.099 Get Features (0Ah): Supported 00:26:16.099 Asynchronous Event Request (0Ch): Supported 00:26:16.099 Keep Alive (18h): Supported 00:26:16.099 I/O Commands 00:26:16.099 ------------ 00:26:16.099 Flush (00h): Supported 00:26:16.099 Write (01h): Supported LBA-Change 00:26:16.099 Read (02h): Supported 00:26:16.099 Write Zeroes (08h): Supported LBA-Change 00:26:16.099 Dataset Management (09h): Supported 00:26:16.099 00:26:16.099 Error Log 00:26:16.099 ========= 00:26:16.099 Entry: 0 00:26:16.099 Error Count: 0x3 00:26:16.099 Submission Queue Id: 0x0 00:26:16.099 Command Id: 0x5 00:26:16.100 Phase Bit: 0 00:26:16.100 Status Code: 0x2 00:26:16.100 Status Code Type: 0x0 00:26:16.100 Do Not Retry: 1 00:26:16.100 Error Location: 0x28 00:26:16.100 LBA: 0x0 00:26:16.100 Namespace: 0x0 00:26:16.100 Vendor Log Page: 0x0 00:26:16.100 ----------- 00:26:16.100 Entry: 1 00:26:16.100 Error Count: 0x2 00:26:16.100 Submission Queue Id: 0x0 00:26:16.100 Command Id: 0x5 00:26:16.100 Phase Bit: 0 00:26:16.100 Status Code: 0x2 00:26:16.100 Status Code Type: 0x0 00:26:16.100 Do Not Retry: 1 00:26:16.100 Error Location: 0x28 00:26:16.100 LBA: 0x0 00:26:16.100 Namespace: 0x0 00:26:16.100 Vendor Log Page: 0x0 00:26:16.100 ----------- 00:26:16.100 Entry: 2 00:26:16.100 Error Count: 0x1 00:26:16.100 Submission Queue Id: 0x0 00:26:16.100 Command Id: 0x4 00:26:16.100 Phase Bit: 0 00:26:16.100 Status Code: 0x2 00:26:16.100 Status Code Type: 0x0 00:26:16.100 Do Not Retry: 1 00:26:16.100 Error Location: 0x28 00:26:16.100 LBA: 0x0 00:26:16.100 Namespace: 0x0 
00:26:16.100 Vendor Log Page: 0x0 00:26:16.100 00:26:16.100 Number of Queues 00:26:16.100 ================ 00:26:16.100 Number of I/O Submission Queues: 128 00:26:16.100 Number of I/O Completion Queues: 128 00:26:16.100 00:26:16.100 ZNS Specific Controller Data 00:26:16.100 ============================ 00:26:16.100 Zone Append Size Limit: 0 00:26:16.100 00:26:16.100 00:26:16.100 Active Namespaces 00:26:16.100 ================= 00:26:16.100 get_feature(0x05) failed 00:26:16.100 Namespace ID:1 00:26:16.100 Command Set Identifier: NVM (00h) 00:26:16.100 Deallocate: Supported 00:26:16.100 Deallocated/Unwritten Error: Not Supported 00:26:16.100 Deallocated Read Value: Unknown 00:26:16.100 Deallocate in Write Zeroes: Not Supported 00:26:16.100 Deallocated Guard Field: 0xFFFF 00:26:16.100 Flush: Supported 00:26:16.100 Reservation: Not Supported 00:26:16.100 Namespace Sharing Capabilities: Multiple Controllers 00:26:16.100 Size (in LBAs): 4194304 (2GiB) 00:26:16.100 Capacity (in LBAs): 4194304 (2GiB) 00:26:16.100 Utilization (in LBAs): 4194304 (2GiB) 00:26:16.100 UUID: 1950414c-964f-426e-a500-84a2b62912bf 00:26:16.100 Thin Provisioning: Not Supported 00:26:16.100 Per-NS Atomic Units: Yes 00:26:16.100 Atomic Boundary Size (Normal): 0 00:26:16.100 Atomic Boundary Size (PFail): 0 00:26:16.100 Atomic Boundary Offset: 0 00:26:16.100 NGUID/EUI64 Never Reused: No 00:26:16.100 ANA group ID: 1 00:26:16.100 Namespace Write Protected: No 00:26:16.100 Number of LBA Formats: 1 00:26:16.100 Current LBA Format: LBA Format #00 00:26:16.100 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:16.100 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:16.100 rmmod nvme_tcp 00:26:16.100 rmmod nvme_fabrics 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.100 17:36:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:18.635 17:36:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:20.752 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:21.326 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:21.326 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:22.263 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:22.263 00:26:22.263 real 0m17.036s 00:26:22.263 user 0m4.526s 00:26:22.263 sys 0m8.917s 
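
For reference, the kernel-target lifecycle traced in this test (nvmf/common.sh@686-705 for setup, @712-723 for clean_kernel_target) reduces to a handful of configfs writes, undone in reverse order at teardown, followed by the iptables-save | grep -v SPDK_NVMF | iptables-restore pass that strips every rule the test tagged. A condensed sketch, assuming nvmet is loaded; the mapping of the traced bare `echo` lines onto specific attribute files (attr_model, attr_allow_any_host, enable) is my reading of the script, not verbatim from it:

  # Setup: expose $nvme as namespace 1 of a kernel NVMe-oF/TCP subsystem on 10.0.0.1:4420.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"           # assumed target of the traced echo
  echo 1        > "$subsys/attr_allow_any_host"                          # assumed target of the traced 'echo 1'
  echo "$nvme"  > "$subsys/namespaces/1/device_path"
  echo 1        > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"        # publish the subsystem on the port

  # Teardown, as in clean_kernel_target: disable, unlink, remove, unload.
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet
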
00:26:22.263 17:36:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.263 17:36:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 ************************************ 00:26:22.263 END TEST nvmf_identify_kernel_target 00:26:22.263 ************************************ 00:26:22.263 17:36:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:22.263 17:36:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.263 17:36:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.263 17:36:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 ************************************ 00:26:22.263 START TEST nvmf_auth_host 00:26:22.263 ************************************ 00:26:22.263 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:22.522 * Looking for test storage... 00:26:22.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.522 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:22.522 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:22.522 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:22.522 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:22.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.523 --rc genhtml_branch_coverage=1 00:26:22.523 --rc genhtml_function_coverage=1 00:26:22.523 --rc genhtml_legend=1 00:26:22.523 --rc geninfo_all_blocks=1 00:26:22.523 --rc geninfo_unexecuted_blocks=1 00:26:22.523 00:26:22.523 ' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:22.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.523 --rc genhtml_branch_coverage=1 00:26:22.523 --rc genhtml_function_coverage=1 00:26:22.523 --rc genhtml_legend=1 00:26:22.523 --rc geninfo_all_blocks=1 00:26:22.523 --rc geninfo_unexecuted_blocks=1 00:26:22.523 00:26:22.523 ' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:22.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.523 --rc genhtml_branch_coverage=1 00:26:22.523 --rc genhtml_function_coverage=1 00:26:22.523 --rc genhtml_legend=1 00:26:22.523 --rc geninfo_all_blocks=1 00:26:22.523 --rc geninfo_unexecuted_blocks=1 00:26:22.523 00:26:22.523 ' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:22.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.523 --rc genhtml_branch_coverage=1 00:26:22.523 --rc genhtml_function_coverage=1 00:26:22.523 --rc genhtml_legend=1 00:26:22.523 --rc geninfo_all_blocks=1 00:26:22.523 --rc geninfo_unexecuted_blocks=1 00:26:22.523 00:26:22.523 ' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.523 17:36:51 
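
The cmp_versions dance traced above (scripts/common.sh@333-368, invoked as "lt 1.15 2") is how the harness decides the installed lcov predates 2.0 and therefore needs the legacy --rc lcov_branch_coverage/lcov_function_coverage flags: both version strings are split on "." and "-", missing fields are treated as zero, and the first differing field decides. A rough sketch of just the less-than case, not the full ge/gt family:

  # Sketch of the field-wise dotted-version comparison traced above.
  lt() {
      local IFS=.- v1 v2 i n
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides: 1.15 < 2
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "old lcov: keep the legacy branch/function coverage rc flags"
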
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:22.523 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.524 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.093 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.093 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.094 17:36:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:29.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:29.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.094 
17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:29.094 Found net devices under 0000:af:00.0: cvl_0_0 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:29.094 Found net devices under 0000:af:00.1: cvl_0_1 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.094 17:36:57 
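
The E810 discovery above is plain sysfs walking: for each PCI function whose device ID matched the expected list (0x159b here, bound to the ice driver), the kernel exposes the attached interface under /sys/bus/pci/devices/<bdf>/net/, and the glob plus basename strip at @411/@427 turns that into the cvl_0_0/cvl_0_1 names. A trimmed sketch of that lookup, with the two addresses from this run hard-coded for illustration:

  # Sketch: list the net devices the kernel created for each matched NIC PCI function.
  for pci in 0000:af:00.0 0000:af:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      for net_dev in "${pci_net_devs[@]##*/}"; do
          # The harness additionally keeps only interfaces it considers up, per the checks above.
          echo "Found net devices under $pci: $net_dev"
      done
  done
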
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:26:29.094 00:26:29.094 --- 10.0.0.2 ping statistics --- 00:26:29.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.094 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:26:29.094 00:26:29.094 --- 10.0.0.1 ping statistics --- 00:26:29.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.094 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:29.094 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2710068 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2710068 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2710068 ']' 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
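
The nvmf_tcp_init sequence traced just above wires the two E810 ports back-to-back through a network namespace so target and initiator traffic actually crosses the NIC: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), and the iptables ACCEPT rule is tagged with an SPDK_NVMF comment so the iptables-save | grep -v SPDK_NVMF | iptables-restore pass in nvmftestfini can drop it wholesale. Condensed from nvmf/common.sh@265-291:

  # Sketch of the namespace plumbing traced above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port leaves the default ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Tag the firewall rule so teardown can find and remove it by comment.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
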
00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=331ef5a21b7c671b01ca699b3c204fc1 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zvV 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 331ef5a21b7c671b01ca699b3c204fc1 0 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 331ef5a21b7c671b01ca699b3c204fc1 0 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=331ef5a21b7c671b01ca699b3c204fc1 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zvV 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zvV 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zvV 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.095 17:36:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=61cec4711018f2282ee620ea4137a2cc194e317fcd6d4af5f2ae40110e751f15 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9xS 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 61cec4711018f2282ee620ea4137a2cc194e317fcd6d4af5f2ae40110e751f15 3 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 61cec4711018f2282ee620ea4137a2cc194e317fcd6d4af5f2ae40110e751f15 3 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=61cec4711018f2282ee620ea4137a2cc194e317fcd6d4af5f2ae40110e751f15 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9xS 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9xS 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.9xS 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=63624ee0ff88bb18692e3cd022efe4b4cf732e886f00ef2e 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ysp 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 63624ee0ff88bb18692e3cd022efe4b4cf732e886f00ef2e 0 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 63624ee0ff88bb18692e3cd022efe4b4cf732e886f00ef2e 0 
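Each gen_dhchap_key call traced in this stretch draws len/2 bytes from /dev/urandom as a hex string and wraps it in the DHHC-1 key format, where the second field is the digest id (0 = null, 1 = sha256, 2 = sha384, 3 = sha512). A hedged sketch of what the python formatting step appears to do; the ASCII-hex-plus-CRC32 encoding is inferred from the keys echoed later in this log, not quoted from the helper:

  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, as in the trace
  # the secret is the ASCII hex string itself; a 4-byte CRC32 suffix
  # (assumed little-endian) is appended before base64 encoding
  python3 -c 'import base64, sys, zlib
  s = sys.argv[1].encode()
  crc = zlib.crc32(s).to_bytes(4, "little")
  print("DHHC-1:00:%s:" % base64.b64encode(s + crc).decode())' "$key"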
00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=63624ee0ff88bb18692e3cd022efe4b4cf732e886f00ef2e 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:29.095 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ysp 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ysp 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ysp 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e1da1ae52017efcb287aee35afb4434371cc8a14c9a732e6 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Gfq 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e1da1ae52017efcb287aee35afb4434371cc8a14c9a732e6 2 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e1da1ae52017efcb287aee35afb4434371cc8a14c9a732e6 2 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e1da1ae52017efcb287aee35afb4434371cc8a14c9a732e6 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Gfq 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Gfq 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Gfq 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.095 17:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b9f86ac334d62d4ed2769f83c066b58 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0Wq 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b9f86ac334d62d4ed2769f83c066b58 1 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b9f86ac334d62d4ed2769f83c066b58 1 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.095 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b9f86ac334d62d4ed2769f83c066b58 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0Wq 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0Wq 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0Wq 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d7aa6ee1e39059eeaa7c99b82335bea0 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Eqo 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d7aa6ee1e39059eeaa7c99b82335bea0 1 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d7aa6ee1e39059eeaa7c99b82335bea0 1 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d7aa6ee1e39059eeaa7c99b82335bea0 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Eqo 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Eqo 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Eqo 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf96787c50f0f66df4d0953070f9fd2fe7dcc749cef62b82 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.E7i 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf96787c50f0f66df4d0953070f9fd2fe7dcc749cef62b82 2 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf96787c50f0f66df4d0953070f9fd2fe7dcc749cef62b82 2 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf96787c50f0f66df4d0953070f9fd2fe7dcc749cef62b82 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:29.096 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.E7i 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.E7i 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.E7i 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:29.355 17:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee7eec8fc6d9fb0cb2719c5d0805abb6 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mct 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee7eec8fc6d9fb0cb2719c5d0805abb6 0 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee7eec8fc6d9fb0cb2719c5d0805abb6 0 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee7eec8fc6d9fb0cb2719c5d0805abb6 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mct 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mct 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.mct 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=de8eac4add0b916b0ec445c1a86367e220ee46746073d3debbf044673f182708 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QyO 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key de8eac4add0b916b0ec445c1a86367e220ee46746073d3debbf044673f182708 3 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 de8eac4add0b916b0ec445c1a86367e220ee46746073d3debbf044673f182708 3 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=de8eac4add0b916b0ec445c1a86367e220ee46746073d3debbf044673f182708 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QyO 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QyO 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.QyO 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2710068 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2710068 ']' 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.355 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.356 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.356 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zvV 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.9xS ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9xS 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ysp 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Gfq ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Gfq 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0Wq 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Eqo ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eqo 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.E7i 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mct ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mct 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.QyO 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.615 17:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:29.615 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:29.616 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:32.146 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:32.405 Waiting for block devices as requested 00:26:32.405 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:32.663 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:32.663 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:32.663 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:32.921 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:32.921 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:32.921 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:32.921 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:33.179 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:33.180 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:33.180 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:33.438 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:33.438 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:33.438 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:33.438 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:33.697 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:33.697 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:34.263 No valid GPT data, bailing 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:34.263 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:26:34.522 No valid GPT data, bailing 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 
-a 10.0.0.1 -t tcp -s 4420 00:26:34.522 00:26:34.522 Discovery Log Number of Records 2, Generation counter 2 00:26:34.522 =====Discovery Log Entry 0====== 00:26:34.522 trtype: tcp 00:26:34.522 adrfam: ipv4 00:26:34.522 subtype: current discovery subsystem 00:26:34.522 treq: not specified, sq flow control disable supported 00:26:34.522 portid: 1 00:26:34.522 trsvcid: 4420 00:26:34.522 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:34.522 traddr: 10.0.0.1 00:26:34.522 eflags: none 00:26:34.522 sectype: none 00:26:34.522 =====Discovery Log Entry 1====== 00:26:34.522 trtype: tcp 00:26:34.522 adrfam: ipv4 00:26:34.522 subtype: nvme subsystem 00:26:34.522 treq: not specified, sq flow control disable supported 00:26:34.522 portid: 1 00:26:34.522 trsvcid: 4420 00:26:34.522 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:34.522 traddr: 10.0.0.1 00:26:34.522 eflags: none 00:26:34.522 sectype: none 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:34.522 17:37:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.522 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.523 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.782 nvme0n1 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.782 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.041 nvme0n1 00:26:35.041 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.041 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.041 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.041 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.041 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.041 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.042 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.042 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.042 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.042 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.042 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
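The nvme0n1 blocks that follow repeat one verify cycle per digest/dhgroup/key combination: restrict the host side to a single combination, attach with the matching keyring entries, then detach before the next round. Stripped of the xtrace noise, one iteration reduces to these RPCs (names, NQNs, and flags exactly as they appear in the surrounding trace; rpc.py stands in for the suite's rpc_cmd wrapper):

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_detach_controller nvme0    # then move to the next combination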
00:26:35.300 nvme0n1 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.300 nvme0n1 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.300 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.559 nvme0n1 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:35.559 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- 
# ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.818 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.819 nvme0n1 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
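The xtrace above is dense, so here is what one pass of the connect_authenticate loop reduces to on the initiator side. This is a condensed sketch, not a verbatim excerpt: it assumes rpc_cmd is SPDK's usual wrapper around scripts/rpc.py, and it reuses the ffdhe2048/keyid-2 parameters logged earlier in the trace.

    # One connect_authenticate pass, condensed from the trace (assumes rpc_cmd
    # wraps SPDK's scripts/rpc.py and that key2/ckey2 are already registered).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # A successful DH-HMAC-CHAP handshake leaves exactly one controller named
    # nvme0 behind; the [[ nvme0 == \n\v\m\e\0 ]] seen in the trace is only
    # xtrace quoting the right-hand pattern of this comparison char by char.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0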
00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
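The nvmf/common.sh@769-783 lines that straddle this point are get_main_ns_ip resolving which address the initiator should dial: an associative array maps the transport to the name of an environment variable, and for tcp that indirects to NVMF_INITIATOR_IP, i.e. 10.0.0.1 in this run. A plausible reconstruction from the traced lines follows; the transport variable name and the early-return branches are assumptions, only the evaluations actually shown in the trace are certain.

    # get_main_ns_ip as suggested by the @769-@783 trace lines. TEST_TRANSPORT
    # and the error branches are guesses; the rest mirrors the trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}  # holds the *name* of the variable
        [[ -z ${!ip} ]] && return 1           # ${!ip} is indirect expansion
        echo "${!ip}"                         # -> 10.0.0.1 for tcp in this run
    }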
00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.819 17:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.077 nvme0n1 00:26:36.077 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.077 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.077 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.078 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.336 nvme0n1 00:26:36.336 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.336 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.336 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.336 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
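On the target side, the auth.sh@48-51 echoes just above (here for keyid 2 again) are nvmet_auth_set_key handing the expected digest, DH group, host key, and controller key to the kernel nvmet host entry. The trace shows only the echoes, not where they are redirected; the configfs paths below are an assumption for illustration, patterned on the per-host dhchap attributes kernel nvmet exposes.

    # Target-side half of one iteration. Paths are assumed, not taken from the
    # trace; the values are the keyid-2 strings logged above.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # auth.sh@48, digest
    echo ffdhe3072      > "$host_dir/dhchap_dhgroup"  # auth.sh@49, DH group
    echo 'DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed:' \
        > "$host_dir/dhchap_key"                      # auth.sh@50, host key
    echo 'DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB:' \
        > "$host_dir/dhchap_ctrl_key"                 # auth.sh@51, ctrl key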
00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.596 nvme0n1 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:36.596 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe3072 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.597 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.855 nvme0n1 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.855 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.856 17:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.114 nvme0n1 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.114 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.114 17:37:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.372 nvme0n1 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.372 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.373 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.631 nvme0n1 00:26:37.631 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.631 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.631 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.631 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.631 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.631 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 2 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.890 
17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.890 17:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.148 nvme0n1 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.148 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.149 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.407 nvme0n1 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.407 
17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:38.407 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
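The `get_main_ns_ip` records around this point (nvmf/common.sh@769-@783) show how the connect address is chosen: a candidate variable name is looked up per transport, then dereferenced. A minimal bash reconstruction inferred from the xtrace output follows; only the tcp path is exercised in this log, and the error-path details are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Bail out if the transport is unset or has no candidate (assumed guard;
        # the trace only shows both -z tests evaluating false for tcp).
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        # First assign the variable *name*, then dereference it; in the trace
        # this resolves NVMF_INITIATOR_IP to 10.0.0.1.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}
        [[ -z $ip ]] && return 1
        echo "$ip"
    }

Here TEST_TRANSPORT and NVMF_INITIATOR_IP are environment variables set earlier in the test run; the resolved address is what the attach commands below pass as -a.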
00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.408 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.667 nvme0n1 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 
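The four echo records just traced (host/auth.sh@48-@51) are `nvmet_auth_set_key` programming the kernel target for the next iteration. A sketch of what those echoes plausibly write, assuming the standard Linux nvmet-auth configfs attributes under the host entry; the destination paths are not visible in the trace and are an assumption. The DHHC-1:NN: prefix on each secret is the NVMe DH-HMAC-CHAP key representation, where NN indicates how the base64 payload was produced (00 raw, 01/02/03 hashed with SHA-256/384/512):

    # keys/ckeys are the arrays the surrounding loop indexes; the host NQN
    # directory below is a hypothetical configfs location for this test setup.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "$key"            > "$host/dhchap_key"      # host secret
        # Only set a controller key when this slot defines one (bidirectional auth).
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

Key slots with an empty ckey (keyid 4 in this run) therefore test unidirectional authentication only.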
00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.667 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 nvme0n1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.492 nvme0n1 00:26:39.492 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.492 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.492 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.492 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.492 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.492 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.751 17:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.009 nvme0n1 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:40.009 17:37:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.009 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.010 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.577 nvme0n1 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
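Every `nvme0n1` block in this section repeats the same initiator-side cycle: restrict the allowed digest and dhgroup, attach with the matching key pair, confirm the controller appeared, and detach. Paraphrased from the trace as a bash sketch (the success check is inferred from the `[[ nvme0 == nvme0 ]]` comparisons in the log):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pass --dhchap-ctrlr-key only when a controller key exists for this slot,
        # exactly as the ckey=(...) expansion in the trace does.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # The bdev controller is only listed if DH-HMAC-CHAP completed successfully.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Restricting the digest and dhgroup via bdev_nvme_set_options before each attach is what forces the negotiation onto the specific combination under test rather than whatever both sides would prefer.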
00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:40.577 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.835 nvme0n1 00:26:40.835 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.835 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.835 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.835 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.835 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.835 17:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.093 17:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.093 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.094 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 nvme0n1 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.660 17:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:41.660 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.661 17:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.661 17:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.228 nvme0n1 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.228 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.795 nvme0n1 00:26:42.795 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.795 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.795 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.795 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.795 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.795 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:43.054 17:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.054 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.621 nvme0n1 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.621 
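The for-loop markers threaded through the trace (host/auth.sh@100-@103) give away the enclosing sweep: every digest is paired with every dhgroup and every key slot, which is why the sha256 pass ending here is followed below by the same ffdhe2048..ffdhe8192 ladder under sha384. In outline:

    # Outline of the sweep driving this section of the log (auth.sh@100-104);
    # array contents are inferred from the values appearing in the trace.
    for digest in "${digests[@]}"; do          # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
            done
        done
    done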
17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.621 17:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 nvme0n1 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 
17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:44.188 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.189 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.447 nvme0n1 00:26:44.447 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.447 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.448 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.706 nvme0n1 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.706 17:37:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.706 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.707 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.707 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.707 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 nvme0n1 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.965 17:37:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 
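
The records above and below follow a fixed pattern from host/auth.sh: for every digest in ${digests[@]}, every DH group in ${dhgroups[@]}, and every key id in ${!keys[@]}, the script programs the kernel nvmet target with the DH-HMAC-CHAP parameters (nvmet_auth_set_key, auth.sh@42-51), then has the SPDK host attach with the matching --dhchap-key / --dhchap-ctrlr-key (connect_authenticate, auth.sh@55-61), verifies the controller came up as nvme0, and detaches it. The sketch below is a paraphrase reconstructed from the xtrace, not the verbatim script: set -x does not print redirections, so the configfs paths that the echo calls at auth.sh@48-51 write to are an assumption (the standard Linux nvmet per-host attributes), the hostdir variable is introduced here for illustration, and the keys/ckeys/digests/dhgroups arrays holding the DHHC-1 secrets are assumed to be defined earlier in the script.

    # Reconstructed sketch of the auth sweep seen in this log (Bash).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed target-side configfs attributes; xtrace hides the redirections.
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$hostdir/dhchap_hash"
        echo "$dhgroup" > "$hostdir/dhchap_dhgroup"
        echo "$key" > "$hostdir/dhchap_key"
        # keyid 4 has an empty ckey, so the controller key is skipped for it.
        [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
    }

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to nothing when ckeys[keyid] is empty (bidirectional auth is optional).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # 10.0.0.1 comes from get_main_ns_ip: for tcp it selects NVMF_INITIATOR_IP.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only succeeds if the DH-HMAC-CHAP negotiation succeeded.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for digest in "${digests[@]}"; do            # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@101
            for keyid in "${!keys[@]}"; do       # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104
            done
        done
    done

At this point in the log the sweep has finished sha256 (up through ffdhe8192, key id 4) and is iterating sha384 over ffdhe2048, ffdhe3072, and ffdhe4096; the bare nvme0n1 tokens interleaved between records appear to be the namespace block device surfacing after each successful authenticated attach.
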
00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.965 17:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 nvme0n1 00:26:44.965 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.965 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.965 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.965 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.965 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha384 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.224 nvme0n1 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.224 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.483 17:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.483 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.484 17:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.484 nvme0n1 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.484 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.742 nvme0n1 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.742 17:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.742 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.001 17:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.001 17:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.001 nvme0n1 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.001 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.259 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.259 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.259 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:46.259 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.259 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.259 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.260 
17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.260 nvme0n1 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.260 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 nvme0n1 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.519 17:37:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.519 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.777 nvme0n1 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.777 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.036 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.036 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.036 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.036 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.036 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.036 17:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.036 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.295 nvme0n1 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.295 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.296 
17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.296 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 nvme0n1 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.554 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.555 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.813 nvme0n1 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.813 17:37:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.813 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.814 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.814 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.072 17:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.072 17:37:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.072 nvme0n1 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.072 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 
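
[editor's note] For reference while reading the trace: each nvmet_auth_set_key <digest> <dhgroup> <keyid> call above (host/auth.sh@42-51) programs the kernel nvmet target with the DH-CHAP parameters for the next connection attempt — that is what the echoed 'hmac(sha384)', ffdhe* and DHHC-1 values are. A minimal sketch of that helper, assuming the standard Linux nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/; the paths and the key tables are reconstructed from the echoed values, not copied from the SPDK source:

    #!/usr/bin/env bash
    # keys[]/ckeys[] hold the DHHC-1 secrets the harness cycles through;
    # keyid 0 is shown here with the values visible in the trace above.
    keys[0]='DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D:'
    ckeys[0]='DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=:'

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Hypothetical host entry; the harness connects as this hostnqn.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. 'hmac(sha384)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"      # ffdhe4096/6144/8192
        echo "$key" > "$host/dhchap_key"              # target-side secret
        # keyid 4 carries no controller key, so the bidirectional
        # (controller-authenticates-to-host) step is skipped for it.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

    # e.g. the iteration in progress here:
    nvmet_auth_set_key sha384 ffdhe6144 0
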
00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.331 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.589 nvme0n1 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.589 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.848 17:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.106 nvme0n1 00:26:49.106 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.106 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.106 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.106 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.107 17:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.107 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.674 nvme0n1 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.674 17:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.674 17:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.933 nvme0n1 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.933 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.224 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.225 17:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.225 17:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.225 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.514 nvme0n1 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:50.514 17:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.514 17:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.081 nvme0n1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.081 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.648 nvme0n1 00:26:51.648 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.648 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.648 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.648 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.648 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.648 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 
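
The iterations traced above and below all follow one fixed pattern per digest/dhgroup/keyid combination: nvmet_auth_set_key first provisions the DH-HMAC-CHAP secrets on the kernel nvmet target side (the echoes of 'hmac(sha384)', ffdhe8192 and the DHHC-1 strings in the trace), then connect_authenticate configures the SPDK host, attaches a controller with the matching keys, verifies it came up, and detaches it again. A minimal sketch of the target-side step, assuming the secrets land in the standard nvmet configfs host attributes (the /sys/kernel/config paths are an assumption; the trace itself only shows the bare echo commands):

    # Target-side sketch of nvmet_auth_set_key sha384 ffdhe8192 2, under the
    # assumption that the values are written into the nvmet host entry via configfs.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest to negotiate
    echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"   # FFDHE group
    # DHHC-1:<subtype>:<base64 secret>: -- subtype 00 is an untransformed
    # secret, 01/02/03 are SHA-256/384/512-transformed secrets.
    echo 'DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed:' \
        > "$host_dir/dhchap_key"                       # host secret
    echo 'DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB:' \
        > "$host_dir/dhchap_ctrl_key"                  # controller secret

The host half of the same iteration, reconstructed from the rpc_cmd calls visible in the trace (the rpc_cmd wrapper definition below is an assumption, and key2/ckey2 are keyring names registered earlier in the run, outside this excerpt):

    # Host-side sketch of connect_authenticate sha384 ffdhe8192 2.
    rpc_cmd() { scripts/rpc.py "$@"; }   # assumption: harness wrapper around SPDK's RPC client

    # Pin the initiator to the digest/dhgroup pair under test:
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # Attach over TCP; --dhchap-ctrlr-key makes the authentication bidirectional:
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # The attach only succeeds if DH-HMAC-CHAP completed, so seeing the
    # controller listed back is the pass criterion; detach for the next keyid:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Note that keyid 4 has no controller key (the trace shows an empty ckey= for it), so that iteration attaches with --dhchap-key key4 alone, exercising unidirectional authentication.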
00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.907 17:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.474 nvme0n1 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.474 17:37:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.474 17:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.041 nvme0n1 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.041 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.042 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.608 nvme0n1 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.608 17:37:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.608 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.866 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.867 nvme0n1 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.867 17:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 1 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.867 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.125 nvme0n1 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.125 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 
2 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.126 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.384 nvme0n1 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.384 17:37:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.384 
17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.384 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.643 nvme0n1 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.643 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 nvme0n1 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:54.902 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.903 17:37:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.903 17:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.161 nvme0n1 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:55.161 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.162 17:37:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.162 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 nvme0n1 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.420 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.421 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.680 nvme0n1 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe3072 3 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.680 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 nvme0n1 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.939 17:37:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.939 17:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.195 nvme0n1 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.195 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.196 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.452 nvme0n1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.453 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.712 nvme0n1 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.712 17:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.712 17:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.712 17:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.972 nvme0n1 00:26:56.972 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.972 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.972 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.972 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.972 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.972 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.231 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.232 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.232 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.232 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.232 
17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.491 nvme0n1 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.491 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.750 nvme0n1 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.750 17:37:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.750 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.751 17:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.318 nvme0n1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.318 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.577 nvme0n1 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:26:58.577 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.578 
17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.578 17:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.146 nvme0n1 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.146 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.405 nvme0n1 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.405 17:37:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.405 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.664 17:37:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.664 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 nvme0n1 00:26:59.923 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.923 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.923 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.923 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.923 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 17:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
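The echo lines at this point are nvmet_auth_set_key loading the next key pair (sha512, ffdhe8192, keyid 0) into the kernel target. xtrace prints only the echo arguments, never their redirection targets, so the configfs paths below are assumed from the upstream nvmet layout rather than taken from this log; a minimal sketch of what the helper is doing:

    # Hedged sketch of nvmet_auth_set_key sha512 ffdhe8192 0 (configfs paths assumed,
    # DHHC-1 key strings truncated -- the full values appear in the surrounding log).
    hostnqn=nqn.2024-02.io.spdk:host0
    host=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)'       > "$host/dhchap_hash"      # DH-HMAC-CHAP digest
    echo ffdhe8192            > "$host/dhchap_dhgroup"   # FFDHE group
    echo 'DHHC-1:00:MzMx...:' > "$host/dhchap_key"       # host key (keyid 0)
    echo 'DHHC-1:03:NjFj...:' > "$host/dhchap_ctrl_key"  # controller key, enables bidirectional auth

Each connect_authenticate pass that follows mirrors the same digest and DH group on the host side via bdev_nvme_set_options before attaching with the matching --dhchap-key/--dhchap-ctrlr-key pair.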
00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxZWY1YTIxYjdjNjcxYjAxY2E2OTliM2MyMDRmYzHWtk5D: 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjFjZWM0NzExMDE4ZjIyODJlZTYyMGVhNDEzN2EyY2MxOTRlMzE3ZmNkNmQ0YWY1ZjJhZTQwMTEwZTc1MWYxNYIm3IQ=: 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.923 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.491 nvme0n1 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.491 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.750 17:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.317 nvme0n1 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.317 17:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:01.317 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.318 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.885 nvme0n1 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmY5Njc4N2M1MGYwZjY2ZGY0ZDA5NTMwNzBmOWZkMmZlN2RjYzc0OWNlZjYyYjgylAm1xA==: 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: ]] 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWU3ZWVjOGZjNmQ5ZmIwY2IyNzE5YzVkMDgwNWFiYjaKOmhp: 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.885 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.886 17:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.886 17:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.453 nvme0n1 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.453 17:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU4ZWFjNGFkZDBiOTE2YjBlYzQ0NWMxYTg2MzY3ZTIyMGVlNDY3NDYwNzNkM2RlYmJmMDQ0NjczZjE4MjcwOKi6Z5c=: 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.453 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.712 17:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.712 17:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.281 nvme0n1 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
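From here the suite switches to negative testing: the target still requires DH-HMAC-CHAP, so an attach issued without any --dhchap-key must be rejected, and the NOT helper (from autotest_common.sh) inverts rpc_cmd's exit status so the test passes only on failure. A standalone sketch of the assertion, assuming rpc_cmd wraps scripts/rpc.py as elsewhere in the suite:

    # Expect the unauthenticated attach to fail (JSON-RPC error -5,
    # "Input/output error", as captured in the request/response blocks below).
    if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "connect without DH-HMAC-CHAP keys unexpectedly succeeded" >&2
        false
    fi

The same pattern then repeats with deliberately mismatched keys (key1 paired with ckey2, again -5) and, later, against bdev_nvme_set_keys, where a bad re-key attempt returns -13, "Permission denied", instead.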
00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.281 request: 00:27:03.281 { 00:27:03.281 "name": "nvme0", 00:27:03.281 "trtype": "tcp", 00:27:03.281 "traddr": "10.0.0.1", 00:27:03.281 "adrfam": "ipv4", 00:27:03.281 "trsvcid": "4420", 00:27:03.281 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:03.281 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:03.281 "prchk_reftag": false, 00:27:03.281 "prchk_guard": false, 00:27:03.281 "hdgst": false, 00:27:03.281 "ddgst": false, 00:27:03.281 "allow_unrecognized_csi": false, 00:27:03.281 "method": "bdev_nvme_attach_controller", 00:27:03.281 "req_id": 1 00:27:03.281 } 00:27:03.281 Got JSON-RPC error response 00:27:03.281 response: 00:27:03.281 { 00:27:03.281 "code": -5, 00:27:03.281 "message": "Input/output 
error" 00:27:03.281 } 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:03.281 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.282 request: 00:27:03.282 { 00:27:03.282 "name": "nvme0", 00:27:03.282 "trtype": "tcp", 00:27:03.282 "traddr": "10.0.0.1", 00:27:03.282 "adrfam": "ipv4", 00:27:03.282 "trsvcid": "4420", 00:27:03.282 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:03.282 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:03.282 "prchk_reftag": false, 00:27:03.282 "prchk_guard": false, 00:27:03.282 "hdgst": false, 00:27:03.282 "ddgst": false, 00:27:03.282 "dhchap_key": "key2", 00:27:03.282 "allow_unrecognized_csi": false, 00:27:03.282 "method": "bdev_nvme_attach_controller", 00:27:03.282 "req_id": 1 00:27:03.282 } 00:27:03.282 Got JSON-RPC error response 00:27:03.282 response: 00:27:03.282 { 00:27:03.282 "code": -5, 00:27:03.282 "message": "Input/output error" 00:27:03.282 } 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.282 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.541 request: 00:27:03.541 { 00:27:03.541 "name": "nvme0", 00:27:03.541 "trtype": "tcp", 00:27:03.541 "traddr": "10.0.0.1", 00:27:03.541 "adrfam": "ipv4", 00:27:03.541 "trsvcid": "4420", 00:27:03.541 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:03.541 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:03.541 "prchk_reftag": false, 00:27:03.541 "prchk_guard": false, 00:27:03.541 "hdgst": false, 00:27:03.541 "ddgst": false, 00:27:03.541 "dhchap_key": "key1", 00:27:03.541 "dhchap_ctrlr_key": "ckey2", 00:27:03.541 "allow_unrecognized_csi": false, 00:27:03.541 "method": "bdev_nvme_attach_controller", 00:27:03.541 "req_id": 1 00:27:03.541 } 00:27:03.541 Got JSON-RPC error response 00:27:03.541 response: 00:27:03.541 { 00:27:03.541 "code": -5, 00:27:03.541 "message": "Input/output error" 00:27:03.541 } 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:03.541 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.542 17:37:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.542 nvme0n1 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.542 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.801 request: 00:27:03.801 { 00:27:03.801 "name": "nvme0", 00:27:03.801 "dhchap_key": "key1", 00:27:03.801 "dhchap_ctrlr_key": "ckey2", 00:27:03.801 "method": "bdev_nvme_set_keys", 00:27:03.801 "req_id": 1 00:27:03.801 } 00:27:03.801 Got JSON-RPC error response 00:27:03.801 response: 00:27:03.801 { 00:27:03.801 "code": -13, 00:27:03.801 "message": "Permission denied" 00:27:03.801 } 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:03.801 17:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # (( 1 != 0 )) 00:27:05.178 17:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:06.114 17:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.114 17:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:06.114 17:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.114 17:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.114 17:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM2MjRlZTBmZjg4YmIxODY5MmUzY2QwMjJlZmU0YjRjZjczMmU4ODZmMDBlZjJl2UF/Qg==: 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: ]] 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTFkYTFhZTUyMDE3ZWZjYjI4N2FlZTM1YWZiNDQzNDM3MWNjOGExNGM5YTczMmU2QjRzBg==: 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.114 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.115 nvme0n1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5Zjg2YWMzMzRkNjJkNGVkMjc2OWY4M2MwNjZiNTilIsed: 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: ]] 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhYTZlZTFlMzkwNTllZWFhN2M5OWI4MjMzNWJlYTAGszsB: 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.115 request: 00:27:06.115 { 00:27:06.115 "name": "nvme0", 00:27:06.115 "dhchap_key": "key2", 00:27:06.115 "dhchap_ctrlr_key": "ckey1", 00:27:06.115 "method": "bdev_nvme_set_keys", 00:27:06.115 "req_id": 1 00:27:06.115 } 00:27:06.115 Got JSON-RPC 
error response 00:27:06.115 response: 00:27:06.115 { 00:27:06.115 "code": -13, 00:27:06.115 "message": "Permission denied" 00:27:06.115 } 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:06.115 17:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.492 rmmod nvme_tcp 00:27:07.492 rmmod nvme_fabrics 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2710068 ']' 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2710068 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2710068 ']' 00:27:07.492 17:37:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2710068 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710068 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710068' 00:27:07.492 killing process with pid 2710068 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2710068 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2710068 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.492 17:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:10.026 17:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:12.556 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:12.556 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:12.556 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:12.556 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:12.556 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:12.556 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:12.815 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:13.749 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:13.749 17:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zvV /tmp/spdk.key-null.ysp /tmp/spdk.key-sha256.0Wq /tmp/spdk.key-sha384.E7i /tmp/spdk.key-sha512.QyO /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:13.749 17:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:16.283 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:16.542 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:16.542 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:16.542 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:16.802 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:16.802 00:27:16.802 real 0m54.425s 00:27:16.802 user 0m49.103s 00:27:16.802 sys 0m12.912s 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.802 ************************************ 00:27:16.802 END TEST nvmf_auth_host 00:27:16.802 ************************************ 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.802 ************************************ 00:27:16.802 START TEST nvmf_digest 00:27:16.802 ************************************ 00:27:16.802 17:37:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:17.062 * Looking for test storage... 00:27:17.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.062 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.063 --rc genhtml_branch_coverage=1 00:27:17.063 --rc genhtml_function_coverage=1 00:27:17.063 --rc genhtml_legend=1 00:27:17.063 --rc geninfo_all_blocks=1 00:27:17.063 --rc geninfo_unexecuted_blocks=1 00:27:17.063 00:27:17.063 ' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.063 --rc genhtml_branch_coverage=1 00:27:17.063 --rc genhtml_function_coverage=1 00:27:17.063 --rc genhtml_legend=1 00:27:17.063 --rc geninfo_all_blocks=1 00:27:17.063 --rc geninfo_unexecuted_blocks=1 00:27:17.063 00:27:17.063 ' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.063 --rc genhtml_branch_coverage=1 00:27:17.063 --rc genhtml_function_coverage=1 00:27:17.063 --rc genhtml_legend=1 00:27:17.063 --rc geninfo_all_blocks=1 00:27:17.063 --rc geninfo_unexecuted_blocks=1 00:27:17.063 00:27:17.063 ' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.063 --rc genhtml_branch_coverage=1 00:27:17.063 --rc genhtml_function_coverage=1 00:27:17.063 --rc genhtml_legend=1 00:27:17.063 --rc geninfo_all_blocks=1 00:27:17.063 --rc geninfo_unexecuted_blocks=1 00:27:17.063 00:27:17.063 ' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.063 
17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:17.063 17:37:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.063 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.635 
17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:23.635 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:23.635 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:23.635 Found net devices under 0000:af:00.0: cvl_0_0 
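The NIC discovery the trace above walks through reduces to one sysfs lookup per PCI function: list the net interfaces registered under the device's PCI path, as nvmf/common.sh does here. A minimal standalone sketch of that step; the helper name is illustrative and the example BDF is the one this run found:

list_pci_net_devs() {
    local pci=$1                                   # e.g. 0000:af:00.0
    local devs=("/sys/bus/pci/devices/$pci/net/"*) # same glob as nvmf/common.sh@411
    [[ -e ${devs[0]} ]] || return 1                # no net driver bound to this function
    printf '%s\n' "${devs[@]##*/}"                 # strip the sysfs path, keep ifnames
}
list_pci_net_devs 0000:af:00.0                     # prints cvl_0_0 on this rig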
00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:23.635 Found net devices under 0000:af:00.1: cvl_0_1 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.635 17:37:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.635 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.635 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.635 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.635 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:27:23.636 00:27:23.636 --- 10.0.0.2 ping statistics --- 00:27:23.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.636 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:27:23.636 00:27:23.636 --- 10.0.0.1 ping statistics --- 00:27:23.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.636 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:23.636 ************************************ 00:27:23.636 START TEST nvmf_digest_clean 00:27:23.636 ************************************ 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2724007 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2724007 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2724007 ']' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.636 [2024-12-09 17:37:52.181785] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:27:23.636 [2024-12-09 17:37:52.181831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.636 [2024-12-09 17:37:52.260041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.636 [2024-12-09 17:37:52.299730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.636 [2024-12-09 17:37:52.299765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.636 [2024-12-09 17:37:52.299771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.636 [2024-12-09 17:37:52.299777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.636 [2024-12-09 17:37:52.299782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
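The nvmfappstart step above launches nvmf_tgt inside the target namespace with --wait-for-rpc and then blocks until the UNIX RPC socket answers (the "Waiting for process to start up and listen on /var/tmp/spdk.sock..." message). A rough sketch of that wait loop, assuming rpc_get_methods as the liveness probe and a ~10 s retry bound; the harness's exact probe and bound may differ:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
ip netns exec cvl_0_0_ns_spdk \
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do                    # ~10 s at 0.1 s per try
    kill -0 "$nvmfpid" || exit 1                   # target died during startup
    $rpc rpc_get_methods &>/dev/null && break      # socket is up and answering
    sleep 0.1
done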
00:27:23.636 [2024-12-09 17:37:52.300324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.636 null0 00:27:23.636 [2024-12-09 17:37:52.456010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.636 [2024-12-09 17:37:52.480178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2724031 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2724031 /var/tmp/bperf.sock 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2724031 ']' 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:23.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.636 [2024-12-09 17:37:52.530981] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:27:23.636 [2024-12-09 17:37:52.531022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724031 ] 00:27:23.636 [2024-12-09 17:37:52.604422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.636 [2024-12-09 17:37:52.644324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:23.636 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:23.895 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.895 17:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:24.153 nvme0n1 00:27:24.153 17:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:24.153 17:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:24.153 Running I/O for 2 seconds... 
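The bperf plumbing traced above follows a fixed three-step pattern: start bdevperf idle (-z --wait-for-rpc), configure it over its private RPC socket, then drive the I/O from bdevperf.py. Condensed from the commands in this run; only the $spdk shorthand and the placeholder sleep are introduced:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
sleep 1                                            # the harness polls the socket instead
"$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
"$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0         # exposes bdev nvme0n1
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests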
00:27:26.469 24879.00 IOPS, 97.18 MiB/s [2024-12-09T16:37:55.648Z] 25221.50 IOPS, 98.52 MiB/s 00:27:26.469 Latency(us) 00:27:26.469 [2024-12-09T16:37:55.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:26.469 nvme0n1 : 2.05 24719.19 96.56 0.00 0.00 5072.69 2527.82 46436.94 00:27:26.469 [2024-12-09T16:37:55.648Z] =================================================================================================================== 00:27:26.469 [2024-12-09T16:37:55.648Z] Total : 24719.19 96.56 0.00 0.00 5072.69 2527.82 46436.94 00:27:26.469 { 00:27:26.469 "results": [ 00:27:26.469 { 00:27:26.469 "job": "nvme0n1", 00:27:26.469 "core_mask": "0x2", 00:27:26.469 "workload": "randread", 00:27:26.469 "status": "finished", 00:27:26.469 "queue_depth": 128, 00:27:26.469 "io_size": 4096, 00:27:26.469 "runtime": 2.049177, 00:27:26.469 "iops": 24719.19214396804, 00:27:26.469 "mibps": 96.55934431237516, 00:27:26.469 "io_failed": 0, 00:27:26.469 "io_timeout": 0, 00:27:26.469 "avg_latency_us": 5072.685453769457, 00:27:26.469 "min_latency_us": 2527.8171428571427, 00:27:26.469 "max_latency_us": 46436.93714285714 00:27:26.469 } 00:27:26.469 ], 00:27:26.469 "core_count": 1 00:27:26.469 } 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:26.469 | select(.opcode=="crc32c") 00:27:26.469 | "\(.module_name) \(.executed)"' 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2724031 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2724031 ']' 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2724031 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.469 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724031 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724031' 00:27:26.728 killing process with pid 2724031 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2724031 00:27:26.728 Received shutdown signal, test time was about 2.000000 seconds 00:27:26.728 00:27:26.728 Latency(us) 00:27:26.728 [2024-12-09T16:37:55.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.728 [2024-12-09T16:37:55.907Z] =================================================================================================================== 00:27:26.728 [2024-12-09T16:37:55.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2724031 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2724501 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2724501 /var/tmp/bperf.sock 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2724501 ']' 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:26.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.728 17:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.728 [2024-12-09 17:37:55.860420] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:27:26.728 [2024-12-09 17:37:55.860468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724501 ] 00:27:26.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.728 Zero copy mechanism will not be used. 00:27:26.987 [2024-12-09 17:37:55.933679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.987 [2024-12-09 17:37:55.972006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.987 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.987 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:26.987 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:26.987 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:26.987 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:27.245 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.245 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.503 nvme0n1 00:27:27.503 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:27.503 17:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:27.503 Zero copy mechanism will not be used. 00:27:27.503 Running I/O for 2 seconds... 
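For the 131072-byte runs bdevperf additionally notes that the I/O size exceeds its 65536-byte zero-copy threshold, so the zero-copy path is skipped. The MiB/s figures in the samples that follow are simply IOPS times the 128 KiB I/O size; checking the 6112-IOPS sample reported below:

# IOPS * bytes-per-I/O, expressed in MiB/s (matches the 764.00 MiB/s sample):
echo 'scale=2; 6112.00 * 131072 / 1048576' | bc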
00:27:29.811 6048.00 IOPS, 756.00 MiB/s [2024-12-09T16:37:58.990Z] 6112.00 IOPS, 764.00 MiB/s 00:27:29.811 Latency(us) 00:27:29.811 [2024-12-09T16:37:58.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.811 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:29.811 nvme0n1 : 2.00 6110.86 763.86 0.00 0.00 2615.59 686.57 4056.99 00:27:29.811 [2024-12-09T16:37:58.990Z] =================================================================================================================== 00:27:29.811 [2024-12-09T16:37:58.990Z] Total : 6110.86 763.86 0.00 0.00 2615.59 686.57 4056.99 00:27:29.811 { 00:27:29.811 "results": [ 00:27:29.811 { 00:27:29.811 "job": "nvme0n1", 00:27:29.811 "core_mask": "0x2", 00:27:29.811 "workload": "randread", 00:27:29.811 "status": "finished", 00:27:29.811 "queue_depth": 16, 00:27:29.811 "io_size": 131072, 00:27:29.811 "runtime": 2.002991, 00:27:29.811 "iops": 6110.861207064834, 00:27:29.811 "mibps": 763.8576508831043, 00:27:29.811 "io_failed": 0, 00:27:29.811 "io_timeout": 0, 00:27:29.811 "avg_latency_us": 2615.585065670713, 00:27:29.811 "min_latency_us": 686.567619047619, 00:27:29.811 "max_latency_us": 4056.9904761904763 00:27:29.811 } 00:27:29.811 ], 00:27:29.811 "core_count": 1 00:27:29.811 } 00:27:29.811 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:29.811 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:29.811 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:29.812 | select(.opcode=="crc32c") 00:27:29.812 | "\(.module_name) \(.executed)"' 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2724501 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2724501 ']' 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2724501 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724501 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724501' 00:27:29.812 killing process with pid 2724501 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2724501 00:27:29.812 Received shutdown signal, test time was about 2.000000 seconds 00:27:29.812 00:27:29.812 Latency(us) 00:27:29.812 [2024-12-09T16:37:58.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.812 [2024-12-09T16:37:58.991Z] =================================================================================================================== 00:27:29.812 [2024-12-09T16:37:58.991Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.812 17:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2724501 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2725117 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2725117 /var/tmp/bperf.sock 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2725117 ']' 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:30.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.070 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:30.070 [2024-12-09 17:37:59.160690] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:27:30.070 [2024-12-09 17:37:59.160739] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725117 ] 00:27:30.070 [2024-12-09 17:37:59.231948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.329 [2024-12-09 17:37:59.272916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.329 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.329 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:30.329 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:30.329 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:30.329 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:30.587 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.587 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.845 nvme0n1 00:27:30.846 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:30.846 17:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:30.846 Running I/O for 2 seconds... 
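After each run the harness verifies that the digests were really computed by the expected accel module: it fetches accel statistics over the bperf socket and extracts the crc32c opcode's module name and execution count with the jq filter visible in the trace. Condensed to its essentials:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Pull accel stats from bdevperf and isolate the crc32c module/counter pair.
read -r acc_module acc_executed < <(
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# Pass criteria from host/digest.sh: executed at least once, by module "software".
(( acc_executed > 0 )) && [[ $acc_module == software ]] && echo PASS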
00:27:33.152 27453.00 IOPS, 107.24 MiB/s [2024-12-09T16:38:02.331Z] 27578.50 IOPS, 107.73 MiB/s 00:27:33.152 Latency(us) 00:27:33.152 [2024-12-09T16:38:02.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.152 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:33.152 nvme0n1 : 2.01 27581.37 107.74 0.00 0.00 4633.22 3432.84 10673.01 00:27:33.152 [2024-12-09T16:38:02.331Z] =================================================================================================================== 00:27:33.152 [2024-12-09T16:38:02.331Z] Total : 27581.37 107.74 0.00 0.00 4633.22 3432.84 10673.01 00:27:33.152 { 00:27:33.152 "results": [ 00:27:33.152 { 00:27:33.152 "job": "nvme0n1", 00:27:33.152 "core_mask": "0x2", 00:27:33.152 "workload": "randwrite", 00:27:33.152 "status": "finished", 00:27:33.152 "queue_depth": 128, 00:27:33.152 "io_size": 4096, 00:27:33.152 "runtime": 2.005593, 00:27:33.152 "iops": 27581.36870242367, 00:27:33.152 "mibps": 107.73972149384247, 00:27:33.152 "io_failed": 0, 00:27:33.152 "io_timeout": 0, 00:27:33.152 "avg_latency_us": 4633.216649114153, 00:27:33.152 "min_latency_us": 3432.8380952380953, 00:27:33.152 "max_latency_us": 10673.005714285715 00:27:33.152 } 00:27:33.152 ], 00:27:33.152 "core_count": 1 00:27:33.152 } 00:27:33.152 17:38:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:33.152 17:38:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:33.152 17:38:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:33.152 17:38:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:33.152 | select(.opcode=="crc32c") 00:27:33.152 | "\(.module_name) \(.executed)"' 00:27:33.152 17:38:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2725117 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2725117 ']' 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2725117 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725117 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725117' 00:27:33.152 killing process with pid 2725117 00:27:33.152 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2725117 00:27:33.153 Received shutdown signal, test time was about 2.000000 seconds 00:27:33.153 00:27:33.153 Latency(us) 00:27:33.153 [2024-12-09T16:38:02.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.153 [2024-12-09T16:38:02.332Z] =================================================================================================================== 00:27:33.153 [2024-12-09T16:38:02.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.153 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2725117 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2725649 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2725649 /var/tmp/bperf.sock 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2725649 ']' 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:33.411 [2024-12-09 17:38:02.416339] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:27:33.411 [2024-12-09 17:38:02.416388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725649 ] 00:27:33.411 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:33.411 Zero copy mechanism will not be used. 00:27:33.411 [2024-12-09 17:38:02.491431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.411 [2024-12-09 17:38:02.529658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:33.411 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:33.670 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.670 17:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.235 nvme0n1 00:27:34.235 17:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:34.235 17:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:34.235 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:34.235 Zero copy mechanism will not be used. 00:27:34.235 Running I/O for 2 seconds... 
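Teardown of each bperf instance goes through killprocess(), whose xtrace repeats after every run: confirm the pid is set and alive, resolve its command name with ps (reactor_1 for a bdevperf child), refuse to signal a bare sudo wrapper, then kill and reap it so /var/tmp/bperf.sock is free for the next run. A simplified rendering of the traced checks (a condensation, not the verbatim common/autotest_common.sh helper):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1               # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")  # resolves to reactor_1 for bdevperf
    if [[ $name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                              # reap the child; releases the socket
}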
00:27:36.547 6946.00 IOPS, 868.25 MiB/s [2024-12-09T16:38:05.726Z] 6817.00 IOPS, 852.12 MiB/s 00:27:36.547 Latency(us) 00:27:36.547 [2024-12-09T16:38:05.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.547 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:36.547 nvme0n1 : 2.00 6815.61 851.95 0.00 0.00 2343.49 1412.14 4774.77 00:27:36.547 [2024-12-09T16:38:05.726Z] =================================================================================================================== 00:27:36.547 [2024-12-09T16:38:05.726Z] Total : 6815.61 851.95 0.00 0.00 2343.49 1412.14 4774.77 00:27:36.547 { 00:27:36.547 "results": [ 00:27:36.547 { 00:27:36.547 "job": "nvme0n1", 00:27:36.547 "core_mask": "0x2", 00:27:36.547 "workload": "randwrite", 00:27:36.547 "status": "finished", 00:27:36.547 "queue_depth": 16, 00:27:36.547 "io_size": 131072, 00:27:36.547 "runtime": 2.003342, 00:27:36.547 "iops": 6815.611113828792, 00:27:36.547 "mibps": 851.951389228599, 00:27:36.547 "io_failed": 0, 00:27:36.547 "io_timeout": 0, 00:27:36.547 "avg_latency_us": 2343.485571993555, 00:27:36.547 "min_latency_us": 1412.144761904762, 00:27:36.547 "max_latency_us": 4774.765714285714 00:27:36.547 } 00:27:36.547 ], 00:27:36.547 "core_count": 1 00:27:36.547 } 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:36.547 | select(.opcode=="crc32c") 00:27:36.547 | "\(.module_name) \(.executed)"' 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2725649 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2725649 ']' 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2725649 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725649 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725649' 00:27:36.547 killing process with pid 2725649 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2725649 00:27:36.547 Received shutdown signal, test time was about 2.000000 seconds 00:27:36.547 00:27:36.547 Latency(us) 00:27:36.547 [2024-12-09T16:38:05.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.547 [2024-12-09T16:38:05.726Z] =================================================================================================================== 00:27:36.547 [2024-12-09T16:38:05.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.547 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2725649 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2724007 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2724007 ']' 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2724007 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724007 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724007' 00:27:36.806 killing process with pid 2724007 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2724007 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2724007 00:27:36.806 00:27:36.806 real 0m13.830s 00:27:36.806 user 0m26.438s 00:27:36.806 sys 0m4.536s 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.806 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.806 ************************************ 00:27:36.806 END TEST nvmf_digest_clean 00:27:36.806 ************************************ 00:27:37.065 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:37.065 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:37.065 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.065 17:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:37.065 ************************************ 00:27:37.065 START TEST nvmf_digest_error 00:27:37.065 ************************************ 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2726183 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2726183 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2726183 ']' 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.065 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.065 [2024-12-09 17:38:06.085602] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:27:37.065 [2024-12-09 17:38:06.085648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.065 [2024-12-09 17:38:06.164444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.065 [2024-12-09 17:38:06.202808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.065 [2024-12-09 17:38:06.202842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.065 [2024-12-09 17:38:06.202850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.065 [2024-12-09 17:38:06.202855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.065 [2024-12-09 17:38:06.202860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
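The digest_error tests restart the target with --wait-for-rpc precisely so crc32c can be re-routed through the accel error-injection module before framework initialization; the initiator side then removes the bdev retry limit and, once connected, arms the corruption. A sketch of the RPC skeleton traced below (the target answers on its default RPC socket inside the cvl_0_0_ns_spdk namespace; the surrounding target config of null bdev and TCP listener is left to the harness):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side, while the app is still paused: send crc32c to the error module.
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
$SPDK/scripts/rpc.py framework_start_init

# Initiator side: unlimited retries so injected digest errors get retried;
# keep injection disabled while the controller attaches with --ddgst.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side again: corrupt crc32c results for the run (as traced, -i 256).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256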
00:27:37.065 [2024-12-09 17:38:06.203375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 [2024-12-09 17:38:06.291876] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 null0 00:27:37.324 [2024-12-09 17:38:06.383762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.324 [2024-12-09 17:38:06.407957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2726377 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2726377 /var/tmp/bperf.sock 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2726377 ']' 
00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:37.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.324 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 [2024-12-09 17:38:06.459188] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:27:37.324 [2024-12-09 17:38:06.459233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726377 ] 00:27:37.583 [2024-12-09 17:38:06.533309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.583 [2024-12-09 17:38:06.572097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.583 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.583 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:37.583 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:37.583 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:37.841 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:37.841 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.841 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.841 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.841 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.841 17:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.099 nvme0n1 00:27:38.099 17:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:38.099 17:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.099 17:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
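Each corrupted digest in the run that follows produces the same pair of log lines: nvme_tcp.c flags the data digest error on the qpair, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer retries silently because the retry count is -1. When sifting a saved console log, a plain grep is enough to count the affected completions (the log path here is illustrative):

grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' /tmp/bperf-digest-error.log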
00:27:38.099 17:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.099 17:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:38.099 17:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:38.099 Running I/O for 2 seconds... 00:27:38.099 [2024-12-09 17:38:07.249504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.099 [2024-12-09 17:38:07.249536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-12-09 17:38:07.249547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.099 [2024-12-09 17:38:07.261557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.099 [2024-12-09 17:38:07.261579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-12-09 17:38:07.261589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.099 [2024-12-09 17:38:07.270445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.099 [2024-12-09 17:38:07.270467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-12-09 17:38:07.270476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.282739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.282761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.282769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.295120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.295146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.295155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.305112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.305132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.305141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.315841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.315862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.315870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.323625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.323646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.323654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.333227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.333247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.333255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.344537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.344558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.344567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.353153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.353173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.353181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.362707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.362728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.362736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.371694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.371713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.371722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.358 [2024-12-09 17:38:07.380815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.358 [2024-12-09 17:38:07.380835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.358 [2024-12-09 17:38:07.380847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.389873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.389894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.389902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.399008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.399028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.399038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.409234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.409255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.409263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.418422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.418442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.418450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.427812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.427832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.427839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.438699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.438719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 
17:38:07.438727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.447002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.447022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.447030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.456341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.456361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.456368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.466138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.466158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.466166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.475249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.475270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.475279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.485258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.485278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.485287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.493740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.493760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.493768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.503501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.503522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13972 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.503530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.511689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.511710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.511718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.521977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.521998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.522006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.359 [2024-12-09 17:38:07.533462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.359 [2024-12-09 17:38:07.533483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.359 [2024-12-09 17:38:07.533492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.544843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.544864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.544876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.553127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.553147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.553156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.565213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.565239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.565247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.576633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.576653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.576661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.584807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.584827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.584836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.594933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.594954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.594963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.605765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.605784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.605792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.615661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.615681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.615689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.624542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.624562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.624571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.635723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.635747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.635755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.647659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.647680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.647688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.655957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.655977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.655985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.666917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.666938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.666945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.679135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.679156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.679164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.688168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.688191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.688199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.699408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.699428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.699436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.709092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.709112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.709120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.717861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23dddd0) 00:27:38.618 [2024-12-09 17:38:07.717881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.618 [2024-12-09 17:38:07.717889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.618 [2024-12-09 17:38:07.727867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.727886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.727894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.737026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.737046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.737053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.745236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.745257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.755253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.755273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.755281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.764852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.764872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.764881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.774324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.774344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.774351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.783235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.783255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.783263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.619 [2024-12-09 17:38:07.794305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.619 [2024-12-09 17:38:07.794325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.619 [2024-12-09 17:38:07.794333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.802578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.802597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.802608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.814755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.814776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.814783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.823271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.823290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.823298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.835024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.835045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.835053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.846271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.846291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.846298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.854719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.854740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.854748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.867208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.867232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.867240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.878519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.878540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.878548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.891091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.891110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.891118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.899847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.899868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.899876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.911428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.911448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.911456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.923292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.923311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.923320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.931875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.931896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.931904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.943140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.943159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-12-09 17:38:07.943167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.877 [2024-12-09 17:38:07.951621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.877 [2024-12-09 17:38:07.951640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:07.951648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:07.962423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:07.962442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:07.962451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:07.972680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:07.972700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:07.972708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:07.980994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:07.981013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:07.981025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:07.991331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:07.991351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:07.991359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:07.999619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:07.999638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:07.999646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:08.009672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:08.009692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:08.009700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:08.019129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:08.019149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:08.019156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:08.027817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:08.027837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:08.027844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:08.040397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:08.040416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:08.040424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.878 [2024-12-09 17:38:08.052156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:38.878 [2024-12-09 17:38:08.052177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.878 [2024-12-09 17:38:08.052185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.062960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.062979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:39.136 [2024-12-09 17:38:08.062987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.071225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.071247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.071255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.083765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.083785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.083793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.093445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.093465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.093473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.103468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.103488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.103496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.113403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.113427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.113436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.122288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.122308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.122316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.130639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.130658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:13330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.130666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.140351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.140370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.136 [2024-12-09 17:38:08.140377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.136 [2024-12-09 17:38:08.150612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.136 [2024-12-09 17:38:08.150632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.150640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.159976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.159999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.160009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.168559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.168580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.168588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.178222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.178241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.178250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.187038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.187057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.187065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.195967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.195987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.195995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.205100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.205119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.205127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.215314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.215334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.215341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.225252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.225272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.225280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.233062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.233089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.233100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 25572.00 IOPS, 99.89 MiB/s [2024-12-09T16:38:08.316Z] [2024-12-09 17:38:08.245506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.245526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.245535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.253360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.253379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.253387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.262509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.262528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.262536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.271915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.271938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.271950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.281136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.281156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.281165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.291698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.291718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.291725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.300057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.300076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.300084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.137 [2024-12-09 17:38:08.309993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.137 [2024-12-09 17:38:08.310013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-12-09 17:38:08.310021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.320110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.320130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.320138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.330480] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.330500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.330508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.339546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.339566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.339573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.349264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.349284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.349292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.358742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.358761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.358769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.366666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.366685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.366693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.375985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.376004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.376012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.385143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.385163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.385171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.394222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.394242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.394253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.403924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.403944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.403951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.413053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.413072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.413079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.422315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.422336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.422344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.431191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.431211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.431224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.440129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.440149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.440156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.449684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.449704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.449712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.460295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.460314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.460321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.468919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.396 [2024-12-09 17:38:08.468938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.396 [2024-12-09 17:38:08.468946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.396 [2024-12-09 17:38:08.481724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.481749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.481758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.494314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.494334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.494341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.505114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.505134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.505142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.514255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.514274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.514282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.526162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.526181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.526189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.538826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.538845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.538853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.550358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.550378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.550386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.397 [2024-12-09 17:38:08.562585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.397 [2024-12-09 17:38:08.562605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.397 [2024-12-09 17:38:08.562613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.655 [2024-12-09 17:38:08.574147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.655 [2024-12-09 17:38:08.574167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.655 [2024-12-09 17:38:08.574174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.655 [2024-12-09 17:38:08.584964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.655 [2024-12-09 17:38:08.584984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.655 [2024-12-09 17:38:08.584992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.655 [2024-12-09 17:38:08.592624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.655 [2024-12-09 17:38:08.592644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.655 [2024-12-09 17:38:08.592653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.655 [2024-12-09 17:38:08.604236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.655 [2024-12-09 17:38:08.604255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:39.655 [2024-12-09 17:38:08.604264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.655 [2024-12-09 17:38:08.615621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.615642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.615650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.625366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.625385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.625394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.634038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.634057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.634065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.646623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.646643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.646650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.657959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.657979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.657987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.666516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.666535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.666546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.679298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.679318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.679326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.691158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.691177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.691185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.703213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.703237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.703245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.714861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.714880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.714889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.725720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.725739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.725747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.733794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.733813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.733821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.746013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.746033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.746040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.758609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.758629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.758636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.768536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.768555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.768563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.778132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.778152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.778160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.789337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.789358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.789366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.800316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.800337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.800345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.808808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.808828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.808836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.656 [2024-12-09 17:38:08.820857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.656 [2024-12-09 17:38:08.820878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.656 [2024-12-09 17:38:08.820885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.833200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 
00:27:39.915 [2024-12-09 17:38:08.833227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.833236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.843122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.843142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.852917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.852936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.852947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.861484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.861504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.861512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.870201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.870230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.870238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.879199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.879224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.879233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.889256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.889276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.889284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.899209] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.899237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.899245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.906664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.906683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.906691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.916147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.916167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.916175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.925273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.925293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.925300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.934193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.934222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.934231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.946258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.946279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.946287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.915 [2024-12-09 17:38:08.956057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.915 [2024-12-09 17:38:08.956077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.915 [2024-12-09 17:38:08.956086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:39.915 [2024-12-09 17:38:08.963982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:08.964002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:08.964009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:08.974600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:08.974620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:08.974628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:08.986274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:08.986293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:08.986301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:08.997054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:08.997074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:08.997082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.005670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.005691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.005698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.015184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.015203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.015211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.027454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.027475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.027483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.036983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.037002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.037009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.047049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.047068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.047076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.056633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.056651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.056659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.065255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.065275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.065283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.074411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.074431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.074440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.916 [2024-12-09 17:38:09.083657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:39.916 [2024-12-09 17:38:09.083677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.916 [2024-12-09 17:38:09.083685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.174 [2024-12-09 17:38:09.093791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.174 [2024-12-09 17:38:09.093812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-12-09 17:38:09.093821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.174 [2024-12-09 17:38:09.103110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.174 [2024-12-09 17:38:09.103130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-12-09 17:38:09.103141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.174 [2024-12-09 17:38:09.111627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.174 [2024-12-09 17:38:09.111648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-12-09 17:38:09.111656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.122193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.122214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.122228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.134931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.134951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.134960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.145312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.145331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.145339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.153317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.153337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.163677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.163697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:40.175 [2024-12-09 17:38:09.163704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.175821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.175842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.175850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.186258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.186281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.186291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.194499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.194520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.194528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.203843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.203863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.203871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.212960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.212980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.212988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.222168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.222188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-12-09 17:38:09.222196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.175 [2024-12-09 17:38:09.232193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0) 00:27:40.175 [2024-12-09 17:38:09.232212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:19590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.175 [2024-12-09 17:38:09.232224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:40.175 [2024-12-09 17:38:09.242303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dddd0)
00:27:40.175 [2024-12-09 17:38:09.242322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.175 [2024-12-09 17:38:09.242329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:40.175 25472.50 IOPS, 99.50 MiB/s
00:27:40.175 Latency(us)
00:27:40.175 [2024-12-09T16:38:09.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:40.175 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:40.175 nvme0n1 : 2.01 25472.50 99.50 0.00 0.00 5020.01 2465.40 18350.08
00:27:40.175 [2024-12-09T16:38:09.354Z] ===================================================================================================================
00:27:40.175 [2024-12-09T16:38:09.354Z] Total : 25472.50 99.50 0.00 0.00 5020.01 2465.40 18350.08
00:27:40.175 {
00:27:40.175   "results": [
00:27:40.175     {
00:27:40.175       "job": "nvme0n1",
00:27:40.175       "core_mask": "0x2",
00:27:40.175       "workload": "randread",
00:27:40.175       "status": "finished",
00:27:40.175       "queue_depth": 128,
00:27:40.175       "io_size": 4096,
00:27:40.175       "runtime": 2.007655,
00:27:40.175       "iops": 25472.503990974546,
00:27:40.175       "mibps": 99.50196871474432,
00:27:40.175       "io_failed": 0,
00:27:40.175       "io_timeout": 0,
00:27:40.175       "avg_latency_us": 5020.011282864964,
00:27:40.175       "min_latency_us": 2465.401904761905,
00:27:40.175       "max_latency_us": 18350.08
00:27:40.175     }
00:27:40.175   ],
00:27:40.175   "core_count": 1
00:27:40.175 }
00:27:40.175 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:40.175 | .driver_specific
00:27:40.175 | .nvme_error
00:27:40.175 | .status_code
00:27:40.175 | .command_transient_transport_error'
17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2726377
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2726377 ']'
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2726377
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726377
17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726377'
00:27:40.434 killing process with pid 2726377
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2726377
00:27:40.434 Received shutdown signal, test time was about 2.000000 seconds
00:27:40.434
00:27:40.434 Latency(us)
00:27:40.434 [2024-12-09T16:38:09.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:40.434 [2024-12-09T16:38:09.613Z] ===================================================================================================================
00:27:40.434 [2024-12-09T16:38:09.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:40.434 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2726377
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2726850
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2726850 /var/tmp/bperf.sock
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2726850 ']'
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:40.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:40.692 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:40.692 [2024-12-09 17:38:09.733442] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
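The get_transient_errcount sequence traced above is a plain RPC round-trip to the still-running bdevperf: bdev_get_iostat returns per-bdev NVMe error counters (collected because the test sets bdev_nvme_set_options --nvme-error-stat, as traced below for the next run), and the jq filter extracts the transient transport error count, 200 here, which the (( 200 > 0 )) check requires to be non-zero. The summary above is also self-consistent: 25472.50 IOPS at a 4096-byte IO size is 25472.50/256 = 99.50 MiB/s. A minimal standalone sketch of the same check, assuming the socket path and bdev name from this run (the errcount variable is illustrative):

  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes only if at least one injected digest error surfaced as a
  # transient transport error completion.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"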
00:27:40.695 [2024-12-09 17:38:09.733490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726850 ]
00:27:40.695 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:40.695 Zero copy mechanism will not be used.
00:27:40.695 [2024-12-09 17:38:09.806270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:40.695 [2024-12-09 17:38:09.846775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:40.953 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:40.953 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:40.953 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:40.953 17:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:40.953 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:40.953 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.953 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:41.276 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.276 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:41.276 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:41.551 nvme0n1
00:27:41.551 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:41.551 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.551 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:41.551 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.551 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:41.551 17:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:41.551 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:41.551 Zero copy mechanism will not be used.
00:27:41.551 Running I/O for 2 seconds...
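The setup traced above reduces to a short RPC sequence against the bdevperf socket: enable per-status-code NVMe error accounting with unlimited bdev retries, ensure no crc32c error injection is active while attaching, attach the target with TCP data digest (--ddgst) enabled so receive-side crc32c is computed on every READ, then turn on crc32c corruption and start the workload. A condensed sketch of those same steps, using the paths from this run (the RPC shell variable is just shorthand; the -i 32 injection argument is passed through exactly as digest.sh does):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Count NVMe error completions per status code; retry failed I/O indefinitely.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # No injection while attaching, so the connect itself succeeds cleanly.
  $RPC accel_error_inject_error -o crc32c -t disable
  # Data digest on the TCP transport is what makes READs verify crc32c at all.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results from here on, then drive I/O for the test window.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The corrupted digests are what produce the data digest error and COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs that follow; note the completions below report len:32, since this run issues 131072-byte reads (32 blocks) rather than the 4096-byte reads of the previous run.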
00:27:41.551 [2024-12-09 17:38:10.717776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.551 [2024-12-09 17:38:10.717809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-12-09 17:38:10.717821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.551 [2024-12-09 17:38:10.723823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.551 [2024-12-09 17:38:10.723851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-12-09 17:38:10.723865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.730187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.730214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.730232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.736816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.736842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.736852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.743494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.743518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.743528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.750028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.750052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.750061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.754310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.754331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.754340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.759357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.759379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.759387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.765700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.765722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.765731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.772390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.772412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.779031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.779057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.779066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.784540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.784562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.784570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.792941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.792962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.792971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.800526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.800548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.800557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.807534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.807554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.807562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.814257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.814278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.814286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.821074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.821095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.821104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.828332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.828355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.828363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.835426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.835447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.835455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.843282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.843303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.843311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.834 [2024-12-09 17:38:10.850949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.834 [2024-12-09 17:38:10.850970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.834 [2024-12-09 17:38:10.850979] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.856644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.856665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.856674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.861937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.861958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.861966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.867282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.867303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.867311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.872703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.872724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.872732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.877900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.877920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.877928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.883229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.883249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.883256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.888608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.888639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.894098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.894119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.894126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.900038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.900059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.900067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.905339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.905360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.905368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.910884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.910905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.910913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.916270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.916290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.916298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.922044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.922065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.922073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.927330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.927350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.927358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.932700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.932721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.932729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.939065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.939086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.939094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.946606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.946627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.946635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.953650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.953672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.953680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.960297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.960318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.960326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.966871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.966892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.966901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.973691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.973712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.973720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.981178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.981200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.981209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.989144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.989167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.989176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:10.998036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:10.998058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:10.998074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.835 [2024-12-09 17:38:11.004885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:41.835 [2024-12-09 17:38:11.004908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.835 [2024-12-09 17:38:11.004917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.094 [2024-12-09 17:38:11.012008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.094 [2024-12-09 17:38:11.012031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.094 [2024-12-09 17:38:11.012041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.094 [2024-12-09 17:38:11.018635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.094 [2024-12-09 17:38:11.018658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.094 [2024-12-09 17:38:11.018667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.094 [2024-12-09 17:38:11.023988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 
00:27:42.094 [2024-12-09 17:38:11.024010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.024018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.029555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.029575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.029584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.035142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.035162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.035171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.041306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.041327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.041336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.047020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.047041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.047049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.052618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.052643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.052651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.058211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.058237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.058246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.063677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.063699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.063706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.069271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.069292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.069299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.074689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.074711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.074719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.080029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.080050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.094 [2024-12-09 17:38:11.080058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.094 [2024-12-09 17:38:11.085616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.094 [2024-12-09 17:38:11.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.085646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.091213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.091240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.091248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.096551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.096572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.096580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.102110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.102131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.102139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.107925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.107946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.107954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.113668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.113690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.113698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.119065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.119086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.119094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.124573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.124594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.124602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.129796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.129817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.129826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.135148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.135168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.135176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.141321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.141343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.141351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.148068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.148090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.148101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.155574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.155595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.155604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.163639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.163660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.163669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.171719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.171748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.179301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.179322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.186853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.186874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.186882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.195208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.195236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.195244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.203467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.203488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.203496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.211316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.211339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.211348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.217008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.217030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.217038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.222329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.222351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.222359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.227796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.227817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.227827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.233273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.233294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.233302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.238804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.238827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.238836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.244110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.244131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.244140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.250184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.250205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.250213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.255695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.095 [2024-12-09 17:38:11.255716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.095 [2024-12-09 17:38:11.255724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.095 [2024-12-09 17:38:11.260990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.096 [2024-12-09 17:38:11.261010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.096 [2024-12-09 17:38:11.261022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.096 [2024-12-09 17:38:11.266495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.096 [2024-12-09 17:38:11.266516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.266523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.354 [2024-12-09 17:38:11.271945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.354 [2024-12-09 17:38:11.271969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.354 [2024-12-09 17:38:11.271978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.354 [2024-12-09 17:38:11.277405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.354 [2024-12-09 17:38:11.277429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.354 [2024-12-09 17:38:11.277438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.354 [2024-12-09 17:38:11.283105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.354 [2024-12-09 17:38:11.283127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.354 [2024-12-09 17:38:11.283135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.354 [2024-12-09 17:38:11.288485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.354 [2024-12-09 17:38:11.288506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.354 [2024-12-09 17:38:11.288515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.354 [2024-12-09 17:38:11.293723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.354 [2024-12-09 17:38:11.293744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.293752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.299011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.299032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.299040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.304235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.304255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.304263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.309491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.309516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.309524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.314832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.314852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.314860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.320198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.320226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.320236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.325627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.325647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.325656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.330978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.330999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.331006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.336573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.336594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.336602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.341851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.341872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.341880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.347018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.347039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.347046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.352376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.352397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.352405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.357723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.357743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.357751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.363101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.363122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.363130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.368370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.368391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.368399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.373708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.373736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.378989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.379009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.379018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.384334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.384354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.384362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.389728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.389749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.389756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.395145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.395174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.400401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.400422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.400433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.405718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.405738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.405746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.411039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.411058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.411066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.416390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.416411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.416418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.421822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.421842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.421851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.427249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.427269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.427277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.432671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.432692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.432700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.437877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.355 [2024-12-09 17:38:11.437898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.355 [2024-12-09 17:38:11.437905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.355 [2024-12-09 17:38:11.443015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.443036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.443043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.448307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.448330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.448338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.453525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.453545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.453552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.458717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.458738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.458745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.463981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.464001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.464008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.469299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.469319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.469327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.474578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.474599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.474606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.479776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.479796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.479804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.484954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.484975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.484983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.490343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.490365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.490373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.495708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.495729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.495736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.501367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.501387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.501395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.506677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.506698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.506706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.511998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.512018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.512026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.517295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.517316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.517324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.522701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.522722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.522730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.356 [2024-12-09 17:38:11.528042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.356 [2024-12-09 17:38:11.528065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.356 [2024-12-09 17:38:11.528074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.533457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.533480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.533489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.538907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.538937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.538945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.544349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.544370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.544378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.549666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.549688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.549698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.554819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.554839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.559979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.560000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.560008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.565055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.565076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.565084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.570187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.570208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.570215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.575230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.575250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.575258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.580314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.580334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.580342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.585411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.585442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.585449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.590522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.590544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.590552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.595708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.615 [2024-12-09 17:38:11.595728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.615 [2024-12-09 17:38:11.595736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.615 [2024-12-09 17:38:11.600834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.600855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.600863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.605885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.605905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.605913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.610976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.610996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.611004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.616202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.616229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.621341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.621361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.621369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.626463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.626483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.626496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.631617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.631637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.631645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.636819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.636840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.636849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.642028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.642048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.642056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.647150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.647171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.647180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.652368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.652388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.652396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.657522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.657541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.657549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.662725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.662747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.662755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.667887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.667909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.667917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.672988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.673014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.673022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.678117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.678138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.678148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.683281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.683303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.683310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.688378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.688400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.688408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.693533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.693554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.693562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.698700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.698722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.698730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.703773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.703794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.616 [2024-12-09 17:38:11.703802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:42.616 [2024-12-09 17:38:11.708922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:42.616 [2024-12-09 17:38:11.708943]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.708951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.616 5361.00 IOPS, 670.12 MiB/s [2024-12-09T16:38:11.795Z] [2024-12-09 17:38:11.715426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.616 [2024-12-09 17:38:11.715447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.715455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.616 [2024-12-09 17:38:11.720660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.616 [2024-12-09 17:38:11.720681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.720689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.616 [2024-12-09 17:38:11.725820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.616 [2024-12-09 17:38:11.725842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.725850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.616 [2024-12-09 17:38:11.730969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.616 [2024-12-09 17:38:11.730989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.730997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.616 [2024-12-09 17:38:11.736194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.616 [2024-12-09 17:38:11.736215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.736231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.616 [2024-12-09 17:38:11.741434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.616 [2024-12-09 17:38:11.741455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.616 [2024-12-09 17:38:11.741463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.746625] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.746646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.746654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.751825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.751846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.751854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.757035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.757056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.757064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.762156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.762177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.762188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.767243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.767264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.767271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.772352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.772373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.772381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.777523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.777543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.777551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:42.617 [2024-12-09 17:38:11.782664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.782684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.782692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.617 [2024-12-09 17:38:11.787749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.617 [2024-12-09 17:38:11.787770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.617 [2024-12-09 17:38:11.787779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.793074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.793099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.793108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.798151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.798173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.798182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.801041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.801063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.801071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.806227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.806248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.806257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.811417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.811438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.811445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.816521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.816541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.816549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.821638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.821659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.821667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.826729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.826750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.826758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.831894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.831915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.831923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.837011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.837032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.837040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.843032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.843061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.848654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.848674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.848685] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.853746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.853766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.853774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.858749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.858770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.858777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.863812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.863831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.863839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.868998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.876 [2024-12-09 17:38:11.869019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.876 [2024-12-09 17:38:11.869026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.876 [2024-12-09 17:38:11.874207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.874234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.874242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.879415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.879435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.879443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.884504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.884524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.884532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.889582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.889602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.889610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.894681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.894704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.894712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.899725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.899744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.899751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.904856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.904876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.904885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.910096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.910116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.910123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.915251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.915272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.920546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.920567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.877 [2024-12-09 17:38:11.920576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.925697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.925718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.925726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.930775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.930796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.930803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.935925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.935945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.935952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.941022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.941042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.941050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.946131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.946152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.951255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.951275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.951283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.956382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.956401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.956409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.961519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.961539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.961547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.966574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.966593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.966601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.972196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.972216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.972231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.976719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.976740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.976748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.982257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.982279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.982290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.988056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.988078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.988086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.993781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.993801] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.993809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:11.998956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:11.998976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:11.998984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:12.004121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:12.004142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:12.004150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:12.009305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:12.009326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:12.009334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:12.014508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.877 [2024-12-09 17:38:12.014529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.877 [2024-12-09 17:38:12.014537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.877 [2024-12-09 17:38:12.019672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.019693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.019701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.878 [2024-12-09 17:38:12.024824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.024845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.024852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.878 [2024-12-09 17:38:12.029940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.029964] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.029972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.878 [2024-12-09 17:38:12.035176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.035197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.035205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.878 [2024-12-09 17:38:12.040720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.040741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.040749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:42.878 [2024-12-09 17:38:12.046215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.046244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.046252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.878 [2024-12-09 17:38:12.051469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:42.878 [2024-12-09 17:38:12.051493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.878 [2024-12-09 17:38:12.051502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.056633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.056673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.061759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.061782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.061790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.066894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 
00:27:43.137 [2024-12-09 17:38:12.066914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.066923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.071973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.071994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.072001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.077118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.077138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.077146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.082157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.082177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.082185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.087281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.087301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.087309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.092453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.092474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.092482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.097614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.097634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.097642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.102729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.102750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.102758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.107914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.107935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.107942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.113113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.113133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.113141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.118304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.118328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.118336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.123444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.123464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.123472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.128594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.128615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.128623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.133799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.133819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.133826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.138929] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.138948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.138956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.144044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.144064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.144071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.149141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.149161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.149169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.154253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.154273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.154281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.159385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.159406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.159413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.164585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.164604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.164612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.169691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.169718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:43.137 [2024-12-09 17:38:12.174772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.174792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.174799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.179902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.179922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.179930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.185034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.185054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.185063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.137 [2024-12-09 17:38:12.190136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.137 [2024-12-09 17:38:12.190154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.137 [2024-12-09 17:38:12.190161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.138 [2024-12-09 17:38:12.195228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.138 [2024-12-09 17:38:12.195249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.138 [2024-12-09 17:38:12.195257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.138 [2024-12-09 17:38:12.200298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.138 [2024-12-09 17:38:12.200317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.138 [2024-12-09 17:38:12.200326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.138 [2024-12-09 17:38:12.205379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420) 00:27:43.138 [2024-12-09 17:38:12.205400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.138 [2024-12-09 17:38:12.205411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:43.138 [2024-12-09 17:38:12.210502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21b1420)
00:27:43.138 [2024-12-09 17:38:12.210522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.138 [2024-12-09 17:38:12.210530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair 0x21b1420 -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1) repeats roughly every 5 ms with varying cid/lba/sqhd values, from 17:38:12.215 through 17:38:12.714; 368 such failed completions in total for this 2-second randread run, per the iostat counter read below ...]
00:27:43.658 5690.50 IOPS, 711.31 MiB/s
00:27:43.658 Latency(us)
00:27:43.658 [2024-12-09T16:38:12.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:43.658 nvme0n1 : 2.00 5690.34 711.29 0.00 0.00 2809.13 811.40 10048.85
00:27:43.658 [2024-12-09T16:38:12.837Z] ===================================================================================================================
00:27:43.658 [2024-12-09T16:38:12.837Z] Total : 5690.34 711.29 0.00 0.00 2809.13 811.40 10048.85
00:27:43.658 {
00:27:43.658   "results": [
00:27:43.658     {
00:27:43.658       "job": "nvme0n1",
00:27:43.658       "core_mask": "0x2",
00:27:43.658       "workload": "randread",
00:27:43.658       "status": "finished",
00:27:43.658       "queue_depth": 16,
00:27:43.658       "io_size": 131072,
00:27:43.658       "runtime": 2.002867,
00:27:43.658       "iops": 5690.342893462222,
00:27:43.658       "mibps": 711.2928616827777,
00:27:43.658       "io_failed": 0,
00:27:43.658       "io_timeout": 0,
00:27:43.658       "avg_latency_us": 2809.134955314055,
00:27:43.658       "min_latency_us": 811.3980952380953,
00:27:43.658       "max_latency_us": 10048.853333333333
00:27:43.658     }
00:27:43.658   ],
00:27:43.658   "core_count": 1
00:27:43.658 }
00:27:43.658 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:43.658 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:43.658 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:43.658 | .driver_specific
00:27:43.658 | .nvme_error
00:27:43.658 | .status_code
00:27:43.658 | .command_transient_transport_error'
00:27:43.658 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:43.933 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 ))
00:27:43.933 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2726850
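For readers following the trace: get_transient_errcount above is a single RPC call piped through a jq filter over bdev_get_iostat output, and the test passes when the resulting counter is non-zero (368 here). A standalone sketch of the same check, assuming this job's workspace layout and bperf socket path:

    get_transient_errcount() {
        local bdev=$1
        # bdev_nvme_set_options --nvme-error-stat (run during setup) makes
        # iostat carry per-status-code NVMe error counters; the jq filter
        # below is verbatim from the trace above.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    (( $(get_transient_errcount nvme0n1) > 0 )) || echo 'no transient transport errors recorded' >&2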
17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2726850 ']'
00:27:43.933 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2726850
00:27:43.933 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:43.933 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:43.933 17:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726850
00:27:43.933 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:43.933 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:43.933 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726850'
killing process with pid 2726850
17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2726850
Received shutdown signal, test time was about 2.000000 seconds
00:27:43.934
00:27:43.934 Latency(us)
00:27:43.934 [2024-12-09T16:38:13.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.934 [2024-12-09T16:38:13.113Z] ===================================================================================================================
00:27:43.934 [2024-12-09T16:38:13.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:43.934 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2726850
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2727358
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2727358 /var/tmp/bperf.sock
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2727358 ']'
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:44.192 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:44.192 [2024-12-09 17:38:13.206517] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:27:44.192 [2024-12-09 17:38:13.206566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727358 ]
00:27:44.192 [2024-12-09 17:38:13.284052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:44.192 [2024-12-09 17:38:13.319697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.449 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:44.706 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.706 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:44.706 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:44.964 nvme0n1
00:27:44.964 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:44.964 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.964 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:44.964 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.964 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:44.964 17:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:44.964 Running I/O for 2 seconds...
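The RPCs just traced are the whole error-injection recipe for this run: enable per-status-code error counting and disable retries on the host, clear any armed crc32c faults, attach the controller with --ddgst so data digests (CRC32C, per the NVMe/TCP transport) are generated and verified, then arm crc32c corruption. A condensed sketch of that sequence; command names and flags are verbatim from the trace, while the mapping of rpc_cmd to the nvmf target's default RPC socket and bperf_rpc to /var/tmp/bperf.sock is our reading of the harness:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    host_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # bdevperf (host) side
    tgt_rpc()  { "$SPDK/scripts/rpc.py" "$@"; }                          # nvmf target side

    # Host: keep NVMe error counters per status code and never retry, so every
    # injected digest error surfaces as a failed command completion.
    host_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target: start clean, with no crc32c faults pending from the previous run.
    tgt_rpc accel_error_inject_error -o crc32c -t disable

    # Host: attach with data digest enabled so CRC32C rides in the data PDUs.
    host_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target: arm corruption of crc32c operations (-i 256 taken verbatim from
    # the trace), so a slice of digests is computed wrong.
    tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then shows up below as a 'Data digest error' record followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.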
00:27:44.965 [2024-12-09 17:38:14.042937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640
00:27:44.965 [2024-12-09 17:38:14.043103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:44.965 [2024-12-09 17:38:14.043133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0
[... the same three-record pattern (Data digest error on tqpair 0xf1f6c0 with pdu=0x200016efd640 -> WRITE command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1) repeats roughly every 9-10 ms, alternating cid:111/cid:112 with varying lba, from 17:38:14.052 through 17:38:14.285 ...]
00:27:45.224 [2024-12-09 17:38:14.294935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640
[2024-12-09 17:38:14.295099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-09 17:38:14.295116] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.304500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.304654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.304672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.313868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.314035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.314057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.323202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.323379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.323398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.332537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.332686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.332704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.341842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.341992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.342009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.351083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.351235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.351253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.360379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.360529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 
17:38:14.360547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.369695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.369843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.369860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.378967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.379117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.379134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.388243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.388395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.388412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.224 [2024-12-09 17:38:14.397577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.224 [2024-12-09 17:38:14.397730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.224 [2024-12-09 17:38:14.397749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.407071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.407226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.407262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.416461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.416612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.416630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.425766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.425913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8767 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:45.482 [2024-12-09 17:38:14.425931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.435033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.435182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.435200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.444347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.444497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.444514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.453623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.453775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.453792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.462906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.463056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.463072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.472177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.472340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.472357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.481460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.481629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.481648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.490834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.490986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19993 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.491003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.500095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.500253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.500270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.509382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.509534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.509551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.518675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.518825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.518842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.528008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.528163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.528180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.537587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.537744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.537762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.546932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.547102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.547120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.556470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.556626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:14392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.556646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.565862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.566033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.566051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.575191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.575367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.575385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.584520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.584672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.584689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.593820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.593973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.593991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.603079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.603235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.482 [2024-12-09 17:38:14.603253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.482 [2024-12-09 17:38:14.612411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.482 [2024-12-09 17:38:14.612563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.483 [2024-12-09 17:38:14.612581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:45.483 [2024-12-09 17:38:14.621714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:45.483 [2024-12-09 17:38:14.621882] nvme_qpair.c: 
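The repeating failure above is the NVMe/TCP data digest (DDGST) check: tcp.c computes a CRC32C over each data PDU's payload in data_crc32_calc_done and compares it with the digest carried in the PDU, and every injected mismatch fails the in-flight WRITE with a transient transport error. Below is a minimal, self-contained sketch of the CRC32C (Castagnoli) computation that the digest is defined over, assuming the plain bitwise reflected algorithm; crc32c() here is illustrative only and is not SPDK's table-driven/accelerated implementation.

    #include <inttypes.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Reflected CRC-32C, polynomial 0x1EDC6F41 (reversed form 0x82F63B78),
     * init and final XOR 0xFFFFFFFF -- the digest NVMe/TCP PDUs carry. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = (const uint8_t *)buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                /* conditionally XOR the reversed polynomial, one bit at a time */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Standard CRC-32C check value: crc32c("123456789") == 0xE3069283 */
        const char *msg = "123456789";
        printf("0x%08" PRIX32 "\n", crc32c(msg, strlen(msg)));
        return 0;
    }

Built with any C compiler, this prints the standard check value 0xE3069283; a receiver that computes a different value than the digest field in the PDU reports exactly the data digest error seen in this log.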
[... the pattern continues uninterrupted, one injected WRITE failing the data digest check every ~9 ms, from 17:38:14.621 through 17:38:15.031 ...]
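Each injected digest error surfaces to the initiator as a completion with the generic status Transient Transport Error, which spdk_nvme_print_completion renders above as (00/22): status code type 0x0, status code 0x22, with the phase (p), more (m), and do-not-retry (dnr) bits all clear, so the command remains retryable. The following is a hedged sketch of how those fields unpack from the 16-bit status/phase word of a completion queue entry; the bit layout follows the NVMe base specification, and decode_status() is an illustrative helper, not an SPDK API.

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit phase + status word of an NVMe completion queue
     * entry (CQE dword 3, bits 31:16). Layout per the NVMe base spec:
     *   bit  0     P   - phase tag
     *   bits 8:1   SC  - status code
     *   bits 11:9  SCT - status code type
     *   bits 13:12 CRD - command retry delay
     *   bit  14    M   - more
     *   bit  15    DNR - do not retry
     */
    static void decode_status(uint16_t sw)
    {
        unsigned p   = sw & 0x1;
        unsigned sc  = (sw >> 1) & 0xFF;
        unsigned sct = (sw >> 9) & 0x7;
        unsigned m   = (sw >> 14) & 0x1;
        unsigned dnr = (sw >> 15) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* SCT 0x0 (generic), SC 0x22 (Transient Transport Error), as in
         * the completions in this log: the SC occupies bits 8:1. */
        uint16_t sw = (uint16_t)(0x22 << 1);
        decode_status(sw);   /* prints "(00/22) p:0 m:0 dnr:0" */
        return 0;
    }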
00:27:46.001 27311.00 IOPS, 106.68 MiB/s [2024-12-09T16:38:15.180Z]
[... the same data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern continues from 17:38:15.041 through 17:38:15.386 ...]
00:27:46.259 [2024-12-09 17:38:15.395353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640
00:27:46.259 [2024-12-09 17:38:15.395506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13625 len:1
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.259 [2024-12-09 17:38:15.395523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.259 [2024-12-09 17:38:15.404600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.259 [2024-12-09 17:38:15.404754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.259 [2024-12-09 17:38:15.404772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.259 [2024-12-09 17:38:15.413898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.259 [2024-12-09 17:38:15.414051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.259 [2024-12-09 17:38:15.414069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.259 [2024-12-09 17:38:15.423167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.259 [2024-12-09 17:38:15.423329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.259 [2024-12-09 17:38:15.423347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.259 [2024-12-09 17:38:15.432424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.259 [2024-12-09 17:38:15.432593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.259 [2024-12-09 17:38:15.432616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.441977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.442127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.442150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.451315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.451470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.451488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.460592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.460744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:111 nsid:1 lba:1581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.460762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.469858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.470009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.470025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.479118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.479280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.479297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.488434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.488589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.497694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.497847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.517 [2024-12-09 17:38:15.497865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.517 [2024-12-09 17:38:15.506958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.517 [2024-12-09 17:38:15.507111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.507128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.516289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.516442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.516460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.525549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.525703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.525721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.534832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.534985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.535002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.544112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.544271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.544288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.553402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.553551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.553569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.562949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.563102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.563119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.572243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.572397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.572414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.581512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.581662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.581679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.590858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 
17:38:15.591013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.591031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.600283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.600436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.600454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.609579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.609731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.609749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.618856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.619009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.619027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.628134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.628296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.628314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.637486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.637643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.637660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.646896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.647047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.647066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.656155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 
00:27:46.518 [2024-12-09 17:38:15.656314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.656332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.665463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.665613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.665630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.674721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.674874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.674891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.684006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.684161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.684182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.518 [2024-12-09 17:38:15.693419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.518 [2024-12-09 17:38:15.693573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.518 [2024-12-09 17:38:15.693593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.702862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.703019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.703039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.712166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.712323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.712342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.721450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.721606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.721624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.730708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.730860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.730879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.740002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.740156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.740174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.749270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.749423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.749442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.758537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.758693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.758710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.767820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.767974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.767992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.777084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.777259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.786395] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.786546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.786564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.795850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.796007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.796028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.805150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.805325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.805343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.814731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.814887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.814905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.824120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.824280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.824299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.833462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.833614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.833632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.776 [2024-12-09 17:38:15.842779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.776 [2024-12-09 17:38:15.842932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.776 [2024-12-09 17:38:15.842949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.852038] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.852188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.852205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.861318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.861472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.861489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.870593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.870744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.870762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.879869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.880022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.880039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.889160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.889316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.889334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.898433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.898584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.898601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.907709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.907861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.907879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 
[2024-12-09 17:38:15.917012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.917164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.917182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.926285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.926436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.926457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.935589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.935742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.935761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:46.777 [2024-12-09 17:38:15.944887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:46.777 [2024-12-09 17:38:15.945040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.777 [2024-12-09 17:38:15.945057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:15.954341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:15.954499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:15.954519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:15.963913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:15.964074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:15.964094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:15.973322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:15.973473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:15.973491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 
m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:15.982599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:15.982751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:15.982768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:15.991903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:15.992056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:15.992074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:16.001149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:16.001312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:16.001331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:16.010488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:16.010644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:16.010662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:16.019749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:16.019901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:16.019919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 [2024-12-09 17:38:16.029035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:16.029187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:16.029205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.034 27370.00 IOPS, 106.91 MiB/s [2024-12-09T16:38:16.213Z] [2024-12-09 17:38:16.038360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1f6c0) with pdu=0x200016efd640 00:27:47.034 [2024-12-09 17:38:16.038514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.034 [2024-12-09 17:38:16.038532] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:47.034
00:27:47.034 Latency(us)
00:27:47.034 [2024-12-09T16:38:16.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.034 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:47.034 nvme0n1 : 2.01 27371.25 106.92 0.00 0.00 4668.51 3464.05 9611.95
00:27:47.035 [2024-12-09T16:38:16.214Z] ===================================================================================================================
00:27:47.035 [2024-12-09T16:38:16.214Z] Total : 27371.25 106.92 0.00 0.00 4668.51 3464.05 9611.95
00:27:47.035 {
00:27:47.035   "results": [
00:27:47.035     {
00:27:47.035       "job": "nvme0n1",
00:27:47.035       "core_mask": "0x2",
00:27:47.035       "workload": "randwrite",
00:27:47.035       "status": "finished",
00:27:47.035       "queue_depth": 128,
00:27:47.035       "io_size": 4096,
00:27:47.035       "runtime": 2.005754,
00:27:47.035       "iops": 27371.25290539119,
00:27:47.035       "mibps": 106.91895666168433,
00:27:47.035       "io_failed": 0,
00:27:47.035       "io_timeout": 0,
00:27:47.035       "avg_latency_us": 4668.510876988464,
00:27:47.035       "min_latency_us": 3464.0457142857144,
00:27:47.035       "max_latency_us": 9611.946666666667
00:27:47.035     }
00:27:47.035   ],
00:27:47.035   "core_count": 1
00:27:47.035 }
00:27:47.035 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:47.035 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:47.035 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:47.035 | .driver_specific
00:27:47.035 | .nvme_error
00:27:47.035 | .status_code
00:27:47.035 | .command_transient_transport_error'
00:27:47.035 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2727358
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2727358 ']'
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2727358
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727358
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727358'
00:27:47.292 killing process with pid 2727358
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2727358
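The check traced above reads the initiator's per-status-code NVMe error counters over the bperf RPC socket and extracts the transient-transport-error count with jq. A minimal sketch of that helper, reconstructed from the trace: the rpc.py and jq invocations are verbatim from the log, while the function wrapper and variable names are assumptions for illustration.

#!/usr/bin/env bash
# Sketch of the error-count check seen in the trace above (assumed wrapper).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_nvme_set_options --nvme-error-stat (set at startup) makes the
    # driver accumulate per-status-code counters under driver_specific.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The test passes only if digest-error injection produced at least one
# transient transport error ("(( 215 > 0 ))" in the run above).
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 )) || exit 1

Note how this fits the JSON summary above: io_failed stays 0 because the bdev layer retries each failed write (bdev-retry-count -1 appears to mean unlimited retries), yet every injected digest failure still increments the transient-transport-error counter, 215 in this pass.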
00:27:47.292 Received shutdown signal, test time was about 2.000000 seconds
00:27:47.292
00:27:47.292 Latency(us)
00:27:47.292 [2024-12-09T16:38:16.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.292 [2024-12-09T16:38:16.471Z] ===================================================================================================================
00:27:47.292 [2024-12-09T16:38:16.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:47.292 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2727358
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2728004
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2728004 /var/tmp/bperf.sock
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2728004 ']'
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:47.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:47.550 [2024-12-09 17:38:16.517320] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:27:47.550 [2024-12-09 17:38:16.517366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728004 ]
00:27:47.550 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:47.550 Zero copy mechanism will not be used.
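At this point digest.sh forks a fresh bdevperf instance for the 128 KiB, queue-depth-16 pass and blocks until its RPC socket answers. A condensed sketch of that launch pattern follows; the bdevperf flags are taken from the trace, while the polling loop is an assumed stand-in for the real waitforlisten helper in autotest_common.sh (which also verifies the pid stays alive).

#!/usr/bin/env bash
# Minimal sketch of the bdevperf start-up sequence traced above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# -m 2: core mask (reactor lands on core 1), -z: idle until RPC-driven tests,
# -w/-o/-q/-t: randwrite, 128 KiB I/O, queue depth 16, 2 s runtime.
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the app answers on its RPC socket (assumed simplification of
# waitforlisten; rpc_get_methods is a cheap always-available RPC).
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done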
00:27:47.550 [2024-12-09 17:38:16.591242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:47.550 [2024-12-09 17:38:16.631331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:47.550 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:47.808 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:47.808 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.808 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:47.808 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.808 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:47.808 17:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:48.373 nvme0n1
00:27:48.373 17:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:48.373 17:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.373 17:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:48.373 17:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.373 17:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:48.373 17:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:48.373 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:48.373 Zero copy mechanism will not be used.
00:27:48.373 Running I/O for 2 seconds...
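Before perform_tests kicks off the measured run, the trace configures both sides: error accounting and a digest-enabled attach on the bdevperf initiator, CRC32C corruption in the accel layer. The same sequence collected into one sketch; all RPC names and flags are verbatim from the log, while the bperf_rpc/rpc_cmd wrappers are assumed simplifications of the suite's helpers (bperf_rpc targets the bdevperf socket, rpc_cmd the default target socket).

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

bperf_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf initiator
rpc_cmd()   { "$spdk/scripts/rpc.py" "$@"; }                         # default RPC socket

# Count NVMe errors per status code; with bdev-retry-count -1 failed writes
# are retried, so io_failed stays 0 while the error counters still grow.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean slate: no leftover crc32c injection from earlier runs.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach with data digest enabled (--ddgst): every NVMe/TCP data PDU carries
# a CRC32C that the receiving side verifies.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption ("-t corrupt -i 32"; -i appears to control how often
# an operation is corrupted), then drive the 2-second randwrite workload.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces in the stream below as a data_crc32_calc_done error on the initiator, followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the affected WRITE.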
00:27:48.373 [2024-12-09 17:38:17.440880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.441012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.441040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:48.373 [2024-12-09 17:38:17.446608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.446688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.446710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:48.373 [2024-12-09 17:38:17.451138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.451208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.451235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:48.373 [2024-12-09 17:38:17.455623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.455691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.455711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:48.373 [2024-12-09 17:38:17.460086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.460157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.460176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:48.373 [2024-12-09 17:38:17.464515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.464579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.464599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:48.373 [2024-12-09 17:38:17.469148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:48.373 [2024-12-09 17:38:17.469204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.373 [2024-12-09 17:38:17.469229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0
[... the same three-line pattern repeats from [2024-12-09 17:38:17.473715] through [2024-12-09 17:38:18.155015] (elapsed 00:27:48.373 - 00:27:49.155): tcp.c:2241:data_crc32_calc_done reports *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8, the affected WRITE sqid:1 cid:0 nsid:1 len:32 (lba varies per entry) is printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0, sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0 ...]
00:27:49.155 [2024-12-09 17:38:18.160970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.161127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.161146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.167526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.167678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.167697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.174461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.174614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.174632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.180784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.180927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.180946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.187176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.187330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.187349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.193355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.193522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.193541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.199670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.199841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.199860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.206028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.206185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.206204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.212455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.212623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.212643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.218577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.218740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.218759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.224806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.224979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.224998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.231066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.231236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.231257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.237327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.237486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.237510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.243253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.243394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.243415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.248369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.248512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.248534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.253445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.253649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.253671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.258644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.258740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.258761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.263824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.263953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.263975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.268734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.268868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.268888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.273665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.273835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.273853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.278526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.278614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.278633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.283515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.283691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.283710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.288330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.288496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.288516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.294010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.294109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.294129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.299665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.299754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.299772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.305593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.305663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.305682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.312632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.155 [2024-12-09 17:38:18.312709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.155 [2024-12-09 17:38:18.312729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.155 [2024-12-09 17:38:18.317891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.156 [2024-12-09 17:38:18.317973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.156 [2024-12-09 17:38:18.317994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.156 [2024-12-09 17:38:18.322565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.156 [2024-12-09 17:38:18.322668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.156 [2024-12-09 17:38:18.322690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.156 [2024-12-09 17:38:18.327228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.156 [2024-12-09 17:38:18.327337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.156 [2024-12-09 17:38:18.327360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.414 [2024-12-09 17:38:18.331891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.414 [2024-12-09 17:38:18.331944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.414 [2024-12-09 17:38:18.331965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.414 [2024-12-09 17:38:18.336531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.414 [2024-12-09 17:38:18.336593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.414 [2024-12-09 17:38:18.336614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.414 [2024-12-09 17:38:18.341183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.414 [2024-12-09 17:38:18.341258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.414 [2024-12-09 17:38:18.341277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.345876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.345978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.345997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.350376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.350502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.350520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.354912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.354972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.354991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.359329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.359397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.359416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.363917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.364026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.364045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.368465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.368522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.368545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.373472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.373612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.373631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.378673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.378744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.378762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.383868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.383928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 
17:38:18.383946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.389064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.389194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.389212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.394506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.394615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.394634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.399170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.399242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.399261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.403761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.403817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.403835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.408721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.408856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.408877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.413509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.413584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.413602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.418143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.418254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:49.415 [2024-12-09 17:38:18.418272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.422880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.422937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.422956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.428212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.428287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.428306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.433518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.433575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.433593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 5893.00 IOPS, 736.62 MiB/s [2024-12-09T16:38:18.594Z] [2024-12-09 17:38:18.440088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.440146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.440165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.445334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.445542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.445561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.450108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.450359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.450396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.454629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.454867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.454887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.459148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.459413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.459434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.463506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.463768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.463788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.467994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.468240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.468261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.472637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.415 [2024-12-09 17:38:18.472899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-12-09 17:38:18.472919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.415 [2024-12-09 17:38:18.477237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.477488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.477508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.481668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.481921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.481941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.486065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.486324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.486344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.490261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.490499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.490519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.494681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.494933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.494956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.499184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.499430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.499451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.503996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.504261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.504281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.509663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.509903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.509923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.514062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.514320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.514340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.518541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 
17:38:18.518789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.518809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.522924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.523164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.523185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.527292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.527550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.527570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.531632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.531883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.531904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.536058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.536319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.536339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.540264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.540525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.540546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.544414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.544678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.544698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.548559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 
00:27:49.416 [2024-12-09 17:38:18.548813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.548833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.552725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.552983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.553003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.556886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.557144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.557164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.561034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.561294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.561314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.565198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.565466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.565486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.569347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.569621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.569641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.573508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.573763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.573783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.577613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) 
with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.577872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.577891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.581750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.582004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.582025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.585872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.586132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.586152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.416 [2024-12-09 17:38:18.590354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.416 [2024-12-09 17:38:18.590617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-12-09 17:38:18.590641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.596054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.596439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.596467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.601367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.601603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.601624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.605845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.606079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.606100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.610385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.610629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.610654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.615034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.615271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.615292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.619615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.619852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.624189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.624426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.624447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.628750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.629019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.629039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.633416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.633660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.633681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.637965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:49.676 [2024-12-09 17:38:18.638207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.676 [2024-12-09 17:38:18.638234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.676 [2024-12-09 17:38:18.642622] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8
00:27:49.676 [2024-12-09 17:38:18.642869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.676 [2024-12-09 17:38:18.642889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:49.676 [2024-12-09 17:38:18.647412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8
00:27:49.676 [2024-12-09 17:38:18.647665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.676 [2024-12-09 17:38:18.647686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for the remaining WRITE commands on tqpair=(0xf1fa00), pdu=0x200016eff3c8, with varying lba, sqhd cycling 0002/0022/0042/0062, cid 1 then cid 0, from [2024-12-09 17:38:18.651869] through [2024-12-09 17:38:19.336108] ...]
00:27:50.199 [2024-12-09 17:38:19.340964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8
00:27:50.199 [2024-12-09 17:38:19.341042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.199 [2024-12-09 17:38:19.341062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:50.199 [2024-12-09 17:38:19.345426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8
00:27:50.199 [2024-12-09 17:38:19.345509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.199 [2024-12-09 17:38:19.345528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:50.199
[2024-12-09 17:38:19.349777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.199 [2024-12-09 17:38:19.349853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.199 [2024-12-09 17:38:19.349872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.199 [2024-12-09 17:38:19.354513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.199 [2024-12-09 17:38:19.354602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.199 [2024-12-09 17:38:19.354624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.199 [2024-12-09 17:38:19.359136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.199 [2024-12-09 17:38:19.359232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.199 [2024-12-09 17:38:19.359251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.199 [2024-12-09 17:38:19.363855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.199 [2024-12-09 17:38:19.363933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.199 [2024-12-09 17:38:19.363954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.199 [2024-12-09 17:38:19.368250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.199 [2024-12-09 17:38:19.368344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.199 [2024-12-09 17:38:19.368362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.200 [2024-12-09 17:38:19.373067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.200 [2024-12-09 17:38:19.373157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.200 [2024-12-09 17:38:19.373179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.457 [2024-12-09 17:38:19.377799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.457 [2024-12-09 17:38:19.377864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.457 [2024-12-09 17:38:19.377885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:50.457 [2024-12-09 17:38:19.382315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.457 [2024-12-09 17:38:19.382393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.457 [2024-12-09 17:38:19.382414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.457 [2024-12-09 17:38:19.386834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.457 [2024-12-09 17:38:19.386916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.457 [2024-12-09 17:38:19.386935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.457 [2024-12-09 17:38:19.391372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.457 [2024-12-09 17:38:19.391466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.457 [2024-12-09 17:38:19.391485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.457 [2024-12-09 17:38:19.395702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.457 [2024-12-09 17:38:19.395775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.395794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.400346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.400435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.400454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.405134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.405214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.405244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.409906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.409998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.410017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.413892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.413987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.414006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.417750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.417873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.417892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.421666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.421746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.421766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.425615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.425712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.425731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.429527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.429610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.429629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.458 [2024-12-09 17:38:19.433493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.433590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.433609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.458 6227.50 IOPS, 778.44 MiB/s [2024-12-09T16:38:19.637Z] [2024-12-09 17:38:19.438379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1fa00) with pdu=0x200016eff3c8 00:27:50.458 [2024-12-09 17:38:19.438502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.458 [2024-12-09 17:38:19.438522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.458 00:27:50.458 Latency(us) 00:27:50.458 [2024-12-09T16:38:19.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.458 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:50.458 nvme0n1 : 2.00 6226.04 778.25 0.00 0.00 2565.47 1677.41 12420.63 00:27:50.458 [2024-12-09T16:38:19.637Z] =================================================================================================================== 00:27:50.458 [2024-12-09T16:38:19.637Z] Total : 6226.04 778.25 0.00 0.00 2565.47 1677.41 12420.63 00:27:50.458 { 00:27:50.458 "results": [ 00:27:50.458 { 00:27:50.458 "job": "nvme0n1", 00:27:50.458 "core_mask": "0x2", 00:27:50.458 "workload": "randwrite", 00:27:50.458 "status": "finished", 00:27:50.458 "queue_depth": 16, 00:27:50.458 "io_size": 131072, 00:27:50.458 "runtime": 2.003521, 00:27:50.458 "iops": 6226.039058237972, 00:27:50.458 "mibps": 778.2548822797465, 00:27:50.458 "io_failed": 0, 00:27:50.458 "io_timeout": 0, 00:27:50.458 "avg_latency_us": 2565.4688075005533, 00:27:50.458 "min_latency_us": 1677.4095238095238, 00:27:50.458 "max_latency_us": 12420.63238095238 00:27:50.458 } 00:27:50.458 ], 00:27:50.458 "core_count": 1 00:27:50.458 } 00:27:50.458 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:50.458 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:50.458 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:50.458 | .driver_specific 00:27:50.458 | .nvme_error 00:27:50.458 | .status_code 00:27:50.458 | .command_transient_transport_error' 00:27:50.458 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 403 > 0 )) 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2728004 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2728004 ']' 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2728004 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728004 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728004' 00:27:50.716 killing process with pid 2728004 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2728004 00:27:50.716 
Received shutdown signal, test time was about 2.000000 seconds 00:27:50.716 00:27:50.716 Latency(us) 00:27:50.716 [2024-12-09T16:38:19.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.716 [2024-12-09T16:38:19.895Z] =================================================================================================================== 00:27:50.716 [2024-12-09T16:38:19.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2728004 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2726183 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2726183 ']' 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2726183 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.716 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726183 00:27:50.974 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:50.974 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:50.974 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726183' 00:27:50.974 killing process with pid 2726183 00:27:50.974 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2726183 00:27:50.974 17:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2726183 00:27:50.974 00:27:50.974 real 0m14.065s 00:27:50.974 user 0m26.877s 00:27:50.974 sys 0m4.660s 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.974 ************************************ 00:27:50.974 END TEST nvmf_digest_error 00:27:50.974 ************************************ 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.974 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.974 rmmod nvme_tcp 00:27:51.233 rmmod nvme_fabrics 00:27:51.233 rmmod nvme_keyring 00:27:51.233 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:51.233 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- 
# set -e 00:27:51.233 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:51.233 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2726183 ']' 00:27:51.233 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2726183 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2726183 ']' 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2726183 00:27:51.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2726183) - No such process 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2726183 is not found' 00:27:51.234 Process with pid 2726183 is not found 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.234 17:38:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.138 17:38:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:53.138 00:27:53.138 real 0m36.336s 00:27:53.138 user 0m55.153s 00:27:53.138 sys 0m13.781s 00:27:53.138 17:38:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.138 17:38:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:53.138 ************************************ 00:27:53.138 END TEST nvmf_digest 00:27:53.138 ************************************ 00:27:53.138 17:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:53.138 17:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:53.139 17:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:53.139 17:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:53.139 17:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:53.139 17:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.139 17:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.398 ************************************ 00:27:53.398 START TEST nvmf_bdevperf 00:27:53.398 ************************************ 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:53.398 * Looking for test storage... 00:27:53.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.398 --rc genhtml_branch_coverage=1 00:27:53.398 --rc genhtml_function_coverage=1 00:27:53.398 --rc genhtml_legend=1 00:27:53.398 --rc geninfo_all_blocks=1 00:27:53.398 --rc geninfo_unexecuted_blocks=1 00:27:53.398 00:27:53.398 ' 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.398 --rc genhtml_branch_coverage=1 00:27:53.398 --rc genhtml_function_coverage=1 00:27:53.398 --rc genhtml_legend=1 00:27:53.398 --rc geninfo_all_blocks=1 00:27:53.398 --rc geninfo_unexecuted_blocks=1 00:27:53.398 00:27:53.398 ' 00:27:53.398 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.398 --rc genhtml_branch_coverage=1 00:27:53.398 --rc genhtml_function_coverage=1 00:27:53.399 --rc genhtml_legend=1 00:27:53.399 --rc geninfo_all_blocks=1 00:27:53.399 --rc geninfo_unexecuted_blocks=1 00:27:53.399 00:27:53.399 ' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:53.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.399 --rc genhtml_branch_coverage=1 00:27:53.399 --rc genhtml_function_coverage=1 00:27:53.399 --rc genhtml_legend=1 00:27:53.399 --rc geninfo_all_blocks=1 00:27:53.399 --rc geninfo_unexecuted_blocks=1 00:27:53.399 00:27:53.399 ' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:53.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.399 17:38:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.964 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:59.965 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:59.965 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
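(The xtrace records around here are nvmf/common.sh walking the detected E810 ports and mapping each PCI address to its kernel net device through sysfs. A minimal sketch of that discovery loop, paraphrased from the trace rather than copied verbatim from the script — pci_devs holds the addresses found above, 0000:af:00.0 and 0000:af:00.1 in this run, and the link-state checks are omitted:)

  # Enumerate the net devices the kernel created for each detected NIC port
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done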
00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:59.965 Found net devices under 0000:af:00.0: cvl_0_0 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:59.965 Found net devices under 0000:af:00.1: cvl_0_1 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:27:59.965 00:27:59.965 --- 10.0.0.2 ping statistics --- 00:27:59.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.965 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:27:59.965 00:27:59.965 --- 10.0.0.1 ping statistics --- 00:27:59.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.965 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2731975 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2731975 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2731975 ']' 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.965 17:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 [2024-12-09 17:38:28.510519] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:27:59.965 [2024-12-09 17:38:28.510570] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.965 [2024-12-09 17:38:28.588655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.965 [2024-12-09 17:38:28.630900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.965 [2024-12-09 17:38:28.630935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.965 [2024-12-09 17:38:28.630943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.965 [2024-12-09 17:38:28.630949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.965 [2024-12-09 17:38:28.630954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.965 [2024-12-09 17:38:28.632361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.965 [2024-12-09 17:38:28.632462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.965 [2024-12-09 17:38:28.632463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.222 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.222 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:00.222 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:00.222 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.223 [2024-12-09 17:38:29.395053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.223 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.480 Malloc0 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
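(Outside the harness, the bring-up traced in these rpc_cmd records — together with the namespace and listener calls that follow just below — corresponds roughly to this scripts/rpc.py sequence against the running nvmf_tgt; a sketch assembled from the traced commands, not a verbatim extract of bdevperf.sh:)

  rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the harness's transport options
  rpc.py bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as namespace 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

(rpc_cmd in the trace resolves to rpc.py with the appropriate RPC socket for the netns-wrapped target.)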
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:00.480 [2024-12-09 17:38:29.464930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:00.480 {
00:28:00.480   "params": {
00:28:00.480     "name": "Nvme$subsystem",
00:28:00.480     "trtype": "$TEST_TRANSPORT",
00:28:00.480     "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:00.480     "adrfam": "ipv4",
00:28:00.480     "trsvcid": "$NVMF_PORT",
00:28:00.480     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:00.480     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:00.480     "hdgst": ${hdgst:-false},
00:28:00.480     "ddgst": ${ddgst:-false}
00:28:00.480   },
00:28:00.480   "method": "bdev_nvme_attach_controller"
00:28:00.480 }
00:28:00.480 EOF
00:28:00.480 )")
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:28:00.480 17:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:00.480   "params": {
00:28:00.480     "name": "Nvme1",
00:28:00.480     "trtype": "tcp",
00:28:00.480     "traddr": "10.0.0.2",
00:28:00.480     "adrfam": "ipv4",
00:28:00.480     "trsvcid": "4420",
00:28:00.480     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:00.480     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:00.480     "hdgst": false,
00:28:00.480     "ddgst": false
00:28:00.480   },
00:28:00.480   "method": "bdev_nvme_attach_controller"
00:28:00.480 }'
00:28:00.480 [2024-12-09 17:38:29.517027] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
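Between the ping at the top of this excerpt and the listener notice just above, the harness completes the whole bring-up: a reachability check and an nvme-tcp modprobe, then five rpc_cmd calls that create the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. A stand-alone sketch of that sequence, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py and that the checkout path and default /var/tmp/spdk.sock RPC socket apply:

    #!/usr/bin/env bash
    # Sketch only: bring-up as traced above. RPC arguments are copied
    # from the rpc_cmd lines in the log; paths and the ping invocation
    # are assumptions, not the harness' exact code.
    set -e
    ip netns exec cvl_0_0_ns_spdk ping -c 1 -W 1 10.0.0.1   # pre-flight reachability check (assumed direction)
    modprobe nvme-tcp                                       # kernel NVMe/TCP initiator module (common.sh@502)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192            # flags as traced; -u 8192 sets the I/O unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB backing bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420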
00:28:00.480 [2024-12-09 17:38:29.517069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732221 ]
00:28:00.480 [2024-12-09 17:38:29.590054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.480 [2024-12-09 17:38:29.632080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:00.738 Running I/O for 1 seconds...
00:28:01.670 11447.00 IOPS, 44.71 MiB/s
00:28:01.670 Latency(us)
[2024-12-09T16:38:30.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:01.670 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:01.670 Verification LBA range: start 0x0 length 0x4000
00:28:01.670 Nvme1n1 : 1.01 11483.10 44.86 0.00 0.00 11103.87 2215.74 13544.11
[2024-12-09T16:38:30.849Z] ===================================================================================================================
[2024-12-09T16:38:30.849Z] Total : 11483.10 44.86 0.00 0.00 11103.87 2215.74 13544.11
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2732457
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:01.928 {
00:28:01.928   "params": {
00:28:01.928     "name": "Nvme$subsystem",
00:28:01.928     "trtype": "$TEST_TRANSPORT",
00:28:01.928     "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:01.928     "adrfam": "ipv4",
00:28:01.928     "trsvcid": "$NVMF_PORT",
00:28:01.928     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:01.928     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:01.928     "hdgst": ${hdgst:-false},
00:28:01.928     "ddgst": ${ddgst:-false}
00:28:01.928   },
00:28:01.928   "method": "bdev_nvme_attach_controller"
00:28:01.928 }
00:28:01.928 EOF
00:28:01.928 )")
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:28:01.928 17:38:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:01.928   "params": {
00:28:01.928     "name": "Nvme1",
00:28:01.928     "trtype": "tcp",
00:28:01.928     "traddr": "10.0.0.2",
00:28:01.928     "adrfam": "ipv4",
00:28:01.928     "trsvcid": "4420",
00:28:01.928     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:01.928     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:01.928     "hdgst": false,
00:28:01.928     "ddgst": false
00:28:01.928   },
00:28:01.928   "method": "bdev_nvme_attach_controller"
00:28:01.928 }'
00:28:01.928 [2024-12-09 17:38:31.010880] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:28:01.928 [2024-12-09 17:38:31.010929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732457 ]
00:28:01.928 [2024-12-09 17:38:31.085409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:02.185 [2024-12-09 17:38:31.122478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:02.442 Running I/O for 15 seconds...
00:28:04.306 11336.00 IOPS, 44.28 MiB/s
[2024-12-09T16:38:34.059Z] 11346.00 IOPS, 44.32 MiB/s
[2024-12-09T16:38:34.059Z] 17:38:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2731975
00:28:04.880 17:38:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:04.880 [2024-12-09 17:38:33.986227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.880 [2024-12-09 17:38:33.986265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs elided for lba 100984 through 101984 in steps of 8 (timestamps 17:38:33.986281-17:38:33.988375): all 127 in-flight READs on qid:1 are aborted the same way; the last queued command, lba 101992, is completed manually below ...]
00:28:04.881 [2024-12-09 17:38:33.988382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cb7f0 is same with the state(6) to be set
00:28:04.881 [2024-12-09 17:38:33.988390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:04.881 [2024-12-09 17:38:33.988395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:04.881 [2024-12-09 17:38:33.988401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101992 len:8 PRP1 0x0 PRP2 0x0
00:28:04.881 [2024-12-09 17:38:33.988412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.881 [2024-12-09 17:38:33.991269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.881 [2024-12-09 17:38:33.991320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:04.881 [2024-12-09 17:38:33.991922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.881 [2024-12-09 17:38:33.991939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:04.881 [2024-12-09 17:38:33.991947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:04.881 [2024-12-09 17:38:33.992117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:04.881 [2024-12-09 17:38:33.992312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.881 [2024-12-09 17:38:33.992321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.881 [2024-12-09 17:38:33.992329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.881 [2024-12-09 17:38:33.992339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
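The failure pattern above follows directly from the kill -9 of the target (pid 2731975) a few lines earlier: with nothing listening on 10.0.0.2:4420 anymore, every reconnect attempt dies in connect() with errno 111 (ECONNREFUSED), so spdk_nvme_ctrlr_reconnect_poll_async can never reinitialize the controller and the bdev layer reports the reset as failed. The same condition can be observed from a shell with bash's built-in /dev/tcp redirection; this is an illustration, not part of the test scripts:

    #!/usr/bin/env bash
    # Poll the NVMe/TCP listener; connect() is refused (errno 111) for
    # as long as no target process is listening on 10.0.0.2:4420.
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo "connection refused - target not back yet"
        sleep 1
    done
    echo "listener is accepting connections again"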
00:28:04.881 [2024-12-09 17:38:34.004357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.881 [2024-12-09 17:38:34.004798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.881 [2024-12-09 17:38:34.004846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:04.881 [2024-12-09 17:38:34.004872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:04.881 [2024-12-09 17:38:34.005331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:04.881 [2024-12-09 17:38:34.005507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.881 [2024-12-09 17:38:34.005517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.881 [2024-12-09 17:38:34.005524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.881 [2024-12-09 17:38:34.005532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... five further reset attempts (at 17:38:34.017159, .029899, .042718, .055743 and .068508) fail with the identical sequence: connect() refused with errno 111, "Failed to flush tqpair ... Bad file descriptor", "controller reinitialization failed", "Resetting controller failed." ...]
00:28:05.140 [2024-12-09 17:38:34.083280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.140 [2024-12-09 17:38:34.083697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.140 [2024-12-09 17:38:34.083721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.140 [2024-12-09 17:38:34.083732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.140 [2024-12-09 17:38:34.083988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.140 [2024-12-09 17:38:34.084255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.140 [2024-12-09 17:38:34.084269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.140 [2024-12-09 17:38:34.084279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.140 [2024-12-09 17:38:34.084289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.140 [2024-12-09 17:38:34.096215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.140 [2024-12-09 17:38:34.096663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.096681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.096688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.096858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.097030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.097040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.097047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.097054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.141 [2024-12-09 17:38:34.109112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.109552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.109598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.109623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.110209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.110641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.110660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.110674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.110688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.141 [2024-12-09 17:38:34.124035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.124547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.124593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.124618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.125203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.125766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.125780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.125790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.125800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.141 [2024-12-09 17:38:34.137210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.137581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.137627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.137653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.138253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.138663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.138673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.138680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.138686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.141 [2024-12-09 17:38:34.150078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.150523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.150541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.150548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.150718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.150890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.150900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.150910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.150918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.141 [2024-12-09 17:38:34.162813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.163157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.163174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.163181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.163369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.163540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.163551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.163557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.163564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.141 [2024-12-09 17:38:34.175563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.175973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.175990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.175997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.176158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.176346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.176357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.176363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.176370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.141 [2024-12-09 17:38:34.188304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.188720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.188767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.188791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.189391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.189908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.189918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.189924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.189930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.141 [2024-12-09 17:38:34.201179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.201598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.201614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.201622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.201783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.201944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.201954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.201960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.201966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.141 [2024-12-09 17:38:34.213930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.214368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.214386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.141 [2024-12-09 17:38:34.214394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.141 [2024-12-09 17:38:34.214553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.141 [2024-12-09 17:38:34.214715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.141 [2024-12-09 17:38:34.214724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.141 [2024-12-09 17:38:34.214731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.141 [2024-12-09 17:38:34.214737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.141 [2024-12-09 17:38:34.226736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.141 [2024-12-09 17:38:34.227149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.141 [2024-12-09 17:38:34.227166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.227173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.227360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.227530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.227540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.227546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.227553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.142 [2024-12-09 17:38:34.239560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.142 [2024-12-09 17:38:34.239925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.142 [2024-12-09 17:38:34.239942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.239953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.240123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.240316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.240326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.240334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.240341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.142 [2024-12-09 17:38:34.252606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.142 [2024-12-09 17:38:34.253010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.142 [2024-12-09 17:38:34.253029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.253037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.253210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.253391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.253401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.253408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.253415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.142 [2024-12-09 17:38:34.265679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.142 [2024-12-09 17:38:34.266084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.142 [2024-12-09 17:38:34.266102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.266110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.266291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.266466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.266476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.266483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.266489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.142 [2024-12-09 17:38:34.278703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.142 [2024-12-09 17:38:34.279133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.142 [2024-12-09 17:38:34.279177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.279202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.279674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.279849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.279857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.279863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.279870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.142 [2024-12-09 17:38:34.291556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.142 [2024-12-09 17:38:34.291965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.142 [2024-12-09 17:38:34.292001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.292028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.292627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.292839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.292847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.292853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.292860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.142 [2024-12-09 17:38:34.304412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.142 [2024-12-09 17:38:34.304752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.142 [2024-12-09 17:38:34.304770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.142 [2024-12-09 17:38:34.304777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.142 [2024-12-09 17:38:34.304936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.142 [2024-12-09 17:38:34.305103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.142 [2024-12-09 17:38:34.305113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.142 [2024-12-09 17:38:34.305119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.142 [2024-12-09 17:38:34.305125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.402 [2024-12-09 17:38:34.317485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.402 [2024-12-09 17:38:34.317912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.402 [2024-12-09 17:38:34.317970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.402 [2024-12-09 17:38:34.317995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.402 [2024-12-09 17:38:34.318596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.402 [2024-12-09 17:38:34.319144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.402 [2024-12-09 17:38:34.319154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.402 [2024-12-09 17:38:34.319164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.402 [2024-12-09 17:38:34.319171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.402 [2024-12-09 17:38:34.330451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.402 [2024-12-09 17:38:34.330869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.402 [2024-12-09 17:38:34.330886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.402 [2024-12-09 17:38:34.330894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.402 [2024-12-09 17:38:34.331053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.402 [2024-12-09 17:38:34.331213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.402 [2024-12-09 17:38:34.331229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.402 [2024-12-09 17:38:34.331236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.402 [2024-12-09 17:38:34.331242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.402 [2024-12-09 17:38:34.343264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.402 [2024-12-09 17:38:34.343668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.402 [2024-12-09 17:38:34.343686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.402 [2024-12-09 17:38:34.343694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.402 [2024-12-09 17:38:34.343863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.402 [2024-12-09 17:38:34.344034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.402 [2024-12-09 17:38:34.344044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.402 [2024-12-09 17:38:34.344052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.402 [2024-12-09 17:38:34.344059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.402 [2024-12-09 17:38:34.356097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.402 [2024-12-09 17:38:34.356376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.402 [2024-12-09 17:38:34.356394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.402 [2024-12-09 17:38:34.356402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.402 [2024-12-09 17:38:34.356575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.402 [2024-12-09 17:38:34.356736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.402 [2024-12-09 17:38:34.356746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.402 [2024-12-09 17:38:34.356752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.402 [2024-12-09 17:38:34.356758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.402 [2024-12-09 17:38:34.368845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.402 [2024-12-09 17:38:34.369259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.402 [2024-12-09 17:38:34.369276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.402 [2024-12-09 17:38:34.369284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.402 [2024-12-09 17:38:34.369444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.402 [2024-12-09 17:38:34.369605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.402 [2024-12-09 17:38:34.369615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.402 [2024-12-09 17:38:34.369621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.402 [2024-12-09 17:38:34.369628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.402 [2024-12-09 17:38:34.381734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.402 [2024-12-09 17:38:34.382127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.403 [2024-12-09 17:38:34.382145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.403 [2024-12-09 17:38:34.382152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.403 [2024-12-09 17:38:34.382316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.403 [2024-12-09 17:38:34.382479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.403 [2024-12-09 17:38:34.382488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.403 [2024-12-09 17:38:34.382494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.403 [2024-12-09 17:38:34.382500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.403 [2024-12-09 17:38:34.394629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.403 [2024-12-09 17:38:34.395042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.403 [2024-12-09 17:38:34.395092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.403 [2024-12-09 17:38:34.395117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.403 [2024-12-09 17:38:34.395716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.403 [2024-12-09 17:38:34.396233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.403 [2024-12-09 17:38:34.396258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.403 [2024-12-09 17:38:34.396265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.403 [2024-12-09 17:38:34.396272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.403 [2024-12-09 17:38:34.407614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.403 [2024-12-09 17:38:34.408019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.403 [2024-12-09 17:38:34.408036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.403 [2024-12-09 17:38:34.408046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.403 [2024-12-09 17:38:34.408206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.403 [2024-12-09 17:38:34.408374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.403 [2024-12-09 17:38:34.408384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.403 [2024-12-09 17:38:34.408391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.403 [2024-12-09 17:38:34.408397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.403 9668.67 IOPS, 37.77 MiB/s [2024-12-09T16:38:34.582Z] [2024-12-09 17:38:34.420486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.420924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.420942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.420950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.421109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.421275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.421284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.421291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.421297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.433347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.433769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.433814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.433838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.434295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.434459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.434467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.434473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.434479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.446219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.446562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.446579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.446587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.446747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.446911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.446921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.446927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.446933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.459089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.459485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.459503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.459510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.459671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.459833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.459843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.459849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.459855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.471833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.472212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.472271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.472296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.472881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.473445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.473455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.473461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.473468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.484582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.484994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.485011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.485019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.485178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.485367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.485378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.485387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.485394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.497321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.497728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.497746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.403 [2024-12-09 17:38:34.497753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.403 [2024-12-09 17:38:34.497922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.403 [2024-12-09 17:38:34.498093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.403 [2024-12-09 17:38:34.498103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.403 [2024-12-09 17:38:34.498110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.403 [2024-12-09 17:38:34.498116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.403 [2024-12-09 17:38:34.510435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.403 [2024-12-09 17:38:34.510853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.403 [2024-12-09 17:38:34.510871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.404 [2024-12-09 17:38:34.510879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.404 [2024-12-09 17:38:34.511047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.404 [2024-12-09 17:38:34.511223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.404 [2024-12-09 17:38:34.511234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.404 [2024-12-09 17:38:34.511241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.404 [2024-12-09 17:38:34.511248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.404 [2024-12-09 17:38:34.523473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.404 [2024-12-09 17:38:34.523920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.404 [2024-12-09 17:38:34.523965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.404 [2024-12-09 17:38:34.523988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.404 [2024-12-09 17:38:34.524518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.404 [2024-12-09 17:38:34.524690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.404 [2024-12-09 17:38:34.524700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.404 [2024-12-09 17:38:34.524707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.404 [2024-12-09 17:38:34.524714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.404 [2024-12-09 17:38:34.538289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.404 [2024-12-09 17:38:34.538806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.404 [2024-12-09 17:38:34.538829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.404 [2024-12-09 17:38:34.538839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.404 [2024-12-09 17:38:34.539097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.404 [2024-12-09 17:38:34.539362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.404 [2024-12-09 17:38:34.539377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.404 [2024-12-09 17:38:34.539387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.404 [2024-12-09 17:38:34.539397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.404 [2024-12-09 17:38:34.551294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.404 [2024-12-09 17:38:34.551724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.404 [2024-12-09 17:38:34.551742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.404 [2024-12-09 17:38:34.551750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.404 [2024-12-09 17:38:34.551925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.404 [2024-12-09 17:38:34.552100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.404 [2024-12-09 17:38:34.552111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.404 [2024-12-09 17:38:34.552117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.404 [2024-12-09 17:38:34.552124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.404 [2024-12-09 17:38:34.564267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.404 [2024-12-09 17:38:34.564677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.404 [2024-12-09 17:38:34.564714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.404 [2024-12-09 17:38:34.564741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.404 [2024-12-09 17:38:34.565346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.404 [2024-12-09 17:38:34.565741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.404 [2024-12-09 17:38:34.565760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.404 [2024-12-09 17:38:34.565774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.404 [2024-12-09 17:38:34.565788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.664 [2024-12-09 17:38:34.579085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.664 [2024-12-09 17:38:34.579513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.664 [2024-12-09 17:38:34.579536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.664 [2024-12-09 17:38:34.579551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.664 [2024-12-09 17:38:34.579806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.664 [2024-12-09 17:38:34.580065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.664 [2024-12-09 17:38:34.580079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.664 [2024-12-09 17:38:34.580089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.664 [2024-12-09 17:38:34.580099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.664 [2024-12-09 17:38:34.592166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.664 [2024-12-09 17:38:34.592597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.664 [2024-12-09 17:38:34.592615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.664 [2024-12-09 17:38:34.592623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.664 [2024-12-09 17:38:34.592797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.664 [2024-12-09 17:38:34.592973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.664 [2024-12-09 17:38:34.592983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.664 [2024-12-09 17:38:34.592990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.664 [2024-12-09 17:38:34.592997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.664 [2024-12-09 17:38:34.604908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.664 [2024-12-09 17:38:34.605247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.664 [2024-12-09 17:38:34.605265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.664 [2024-12-09 17:38:34.605273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.664 [2024-12-09 17:38:34.605433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.664 [2024-12-09 17:38:34.605594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.664 [2024-12-09 17:38:34.605604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.664 [2024-12-09 17:38:34.605610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.664 [2024-12-09 17:38:34.605616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.664 [2024-12-09 17:38:34.617853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.664 [2024-12-09 17:38:34.618296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.665 [2024-12-09 17:38:34.618314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:05.665 [2024-12-09 17:38:34.618322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:05.665 [2024-12-09 17:38:34.618496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:05.665 [2024-12-09 17:38:34.618659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.665 [2024-12-09 17:38:34.618671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.665 [2024-12-09 17:38:34.618678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.665 [2024-12-09 17:38:34.618685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.665 [2024-12-09 17:38:34.630673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.631082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.631121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.631147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.631727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.631899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.631910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.631916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.631923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.665 [2024-12-09 17:38:34.643452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.643789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.643807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.643816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.643985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.644155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.644165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.644172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.644179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.665 [2024-12-09 17:38:34.656418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.656764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.656781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.656789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.656960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.657129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.657139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.657146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.657156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.665 [2024-12-09 17:38:34.669451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.669842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.669861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.669869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.670044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.670227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.670237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.670244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.670251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.665 [2024-12-09 17:38:34.682391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.682774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.682792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.682799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.682958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.683120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.683130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.683136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.683142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.665 [2024-12-09 17:38:34.695205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.695506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.695524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.695531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.695691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.695853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.695862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.695868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.695875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.665 [2024-12-09 17:38:34.708106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.708473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.708517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.708541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.709011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.709172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.709182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.709188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.709195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.665 [2024-12-09 17:38:34.720971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.721362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.721380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.721387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.721548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.721710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.721719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.721725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.721732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.665 [2024-12-09 17:38:34.733806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.734224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.734242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.665 [2024-12-09 17:38:34.734250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.665 [2024-12-09 17:38:34.734409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.665 [2024-12-09 17:38:34.734570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.665 [2024-12-09 17:38:34.734580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.665 [2024-12-09 17:38:34.734586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.665 [2024-12-09 17:38:34.734592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.665 [2024-12-09 17:38:34.746669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.665 [2024-12-09 17:38:34.747062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.665 [2024-12-09 17:38:34.747080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.747087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.747256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.747418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.747428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.747434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.747440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.666 [2024-12-09 17:38:34.759729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.760155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.760173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.760181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.760357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.760528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.760538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.760544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.760551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.666 [2024-12-09 17:38:34.772784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.773135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.773153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.773160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.773336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.773508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.773518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.773524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.773531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.666 [2024-12-09 17:38:34.785783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.786203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.786226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.786234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.786403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.786576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.786589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.786595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.786603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.666 [2024-12-09 17:38:34.798630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.799104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.799122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.799129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.799316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.799486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.799497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.799503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.799510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.666 [2024-12-09 17:38:34.811620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.812042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.812089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.812114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.812507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.812678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.812688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.812695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.812701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.666 [2024-12-09 17:38:34.824452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.824777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.824794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.824801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.824961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.825122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.825131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.825138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.825148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.666 [2024-12-09 17:38:34.837349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.666 [2024-12-09 17:38:34.837635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.666 [2024-12-09 17:38:34.837653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.666 [2024-12-09 17:38:34.837662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.666 [2024-12-09 17:38:34.837846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.666 [2024-12-09 17:38:34.838037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.666 [2024-12-09 17:38:34.838047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.666 [2024-12-09 17:38:34.838055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.666 [2024-12-09 17:38:34.838062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.926 [2024-12-09 17:38:34.850358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.926 [2024-12-09 17:38:34.850714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.926 [2024-12-09 17:38:34.850733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.926 [2024-12-09 17:38:34.850741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.926 [2024-12-09 17:38:34.850911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.926 [2024-12-09 17:38:34.851082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.926 [2024-12-09 17:38:34.851093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.926 [2024-12-09 17:38:34.851102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.926 [2024-12-09 17:38:34.851110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.926 [2024-12-09 17:38:34.863439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.926 [2024-12-09 17:38:34.863735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.926 [2024-12-09 17:38:34.863753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.926 [2024-12-09 17:38:34.863762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.926 [2024-12-09 17:38:34.863935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.926 [2024-12-09 17:38:34.864111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.926 [2024-12-09 17:38:34.864122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.926 [2024-12-09 17:38:34.864128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.926 [2024-12-09 17:38:34.864136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.926 [2024-12-09 17:38:34.876568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.926 [2024-12-09 17:38:34.876948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.926 [2024-12-09 17:38:34.876970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.926 [2024-12-09 17:38:34.876978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.926 [2024-12-09 17:38:34.877147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.926 [2024-12-09 17:38:34.877326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.926 [2024-12-09 17:38:34.877337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.926 [2024-12-09 17:38:34.877343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.926 [2024-12-09 17:38:34.877350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.927 [2024-12-09 17:38:34.889542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.889967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.889985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.889992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.890152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.890340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.890351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.890358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.890365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.927 [2024-12-09 17:38:34.902429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.902776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.902793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.902802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.902971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.903143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.903153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.903159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.903166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.927 [2024-12-09 17:38:34.915240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.915621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.915639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.915647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.915820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.915990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.916000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.916007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.916013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.927 [2024-12-09 17:38:34.928181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.928538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.928557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.928565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.928735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.928907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.928917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.928923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.928930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.927 [2024-12-09 17:38:34.941080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.941378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.941397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.941405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.941574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.941744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.941754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.941761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.941768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.927 [2024-12-09 17:38:34.953951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.954353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.954399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.954423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.955009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.955574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.955589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.955595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.955603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.927 [2024-12-09 17:38:34.966841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.967180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.967197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.967205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.967394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.967566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.967576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.967583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.967589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.927 [2024-12-09 17:38:34.979760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.980159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.980177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.980184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.980360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.980538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.980548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.980554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.980560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.927 [2024-12-09 17:38:34.992595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:34.993010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:34.993027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:34.993034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:34.993203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:34.993382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:34.993393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:34.993400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:34.993407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.927 [2024-12-09 17:38:35.005547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:35.005851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:35.005869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.927 [2024-12-09 17:38:35.005877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.927 [2024-12-09 17:38:35.006047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.927 [2024-12-09 17:38:35.006225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.927 [2024-12-09 17:38:35.006236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.927 [2024-12-09 17:38:35.006243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.927 [2024-12-09 17:38:35.006249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.927 [2024-12-09 17:38:35.018538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.927 [2024-12-09 17:38:35.018880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.927 [2024-12-09 17:38:35.018899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.018907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.019082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.019287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.019297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.019304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.019311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.928 [2024-12-09 17:38:35.031566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.928 [2024-12-09 17:38:35.031846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.928 [2024-12-09 17:38:35.031864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.031872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.032041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.032212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.032229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.032235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.032242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.928 [2024-12-09 17:38:35.044477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.928 [2024-12-09 17:38:35.044896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.928 [2024-12-09 17:38:35.044946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.044971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.045519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.045691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.045699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.045705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.045711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.928 [2024-12-09 17:38:35.057422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.928 [2024-12-09 17:38:35.057845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.928 [2024-12-09 17:38:35.057884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.057910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.058489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.058651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.058659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.058666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.058672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.928 [2024-12-09 17:38:35.070236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.928 [2024-12-09 17:38:35.070663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.928 [2024-12-09 17:38:35.070680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.070688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.070847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.071008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.071017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.071024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.071030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.928 [2024-12-09 17:38:35.083024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.928 [2024-12-09 17:38:35.083447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.928 [2024-12-09 17:38:35.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.083518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.083931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.084092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.084102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.084108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.084114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.928 [2024-12-09 17:38:35.095919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.928 [2024-12-09 17:38:35.096279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.928 [2024-12-09 17:38:35.096315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:05.928 [2024-12-09 17:38:35.096324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:05.928 [2024-12-09 17:38:35.096499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:05.928 [2024-12-09 17:38:35.096675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.928 [2024-12-09 17:38:35.096685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.928 [2024-12-09 17:38:35.096694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.928 [2024-12-09 17:38:35.096701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.187 [2024-12-09 17:38:35.108935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.187 [2024-12-09 17:38:35.109369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.187 [2024-12-09 17:38:35.109388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.187 [2024-12-09 17:38:35.109396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.187 [2024-12-09 17:38:35.109578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.187 [2024-12-09 17:38:35.109750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.187 [2024-12-09 17:38:35.109770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.187 [2024-12-09 17:38:35.109776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.187 [2024-12-09 17:38:35.109783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.187 [2024-12-09 17:38:35.121815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.187 [2024-12-09 17:38:35.122234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.187 [2024-12-09 17:38:35.122252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.187 [2024-12-09 17:38:35.122260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.187 [2024-12-09 17:38:35.122437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.187 [2024-12-09 17:38:35.122599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.187 [2024-12-09 17:38:35.122609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.187 [2024-12-09 17:38:35.122618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.187 [2024-12-09 17:38:35.122625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.187 [2024-12-09 17:38:35.134624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.187 [2024-12-09 17:38:35.135021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.187 [2024-12-09 17:38:35.135039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.187 [2024-12-09 17:38:35.135047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.187 [2024-12-09 17:38:35.135215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.187 [2024-12-09 17:38:35.135392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.187 [2024-12-09 17:38:35.135402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.187 [2024-12-09 17:38:35.135408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.187 [2024-12-09 17:38:35.135415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.187 [2024-12-09 17:38:35.147421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.187 [2024-12-09 17:38:35.147820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.187 [2024-12-09 17:38:35.147838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.187 [2024-12-09 17:38:35.147846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.187 [2024-12-09 17:38:35.148015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.187 [2024-12-09 17:38:35.148184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.148195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.148201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.148207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.188 [2024-12-09 17:38:35.160255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.160651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.160668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.160676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.160835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.160996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.161006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.161012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.161018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.188 [2024-12-09 17:38:35.173055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.173458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.173498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.173525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.174055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.174225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.174235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.174242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.174265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.188 [2024-12-09 17:38:35.185858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.186193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.186211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.186226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.186409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.186580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.186590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.186596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.186603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.188 [2024-12-09 17:38:35.198680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.199092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.199134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.199160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.199762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.200251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.200261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.200268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.200275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.188 [2024-12-09 17:38:35.211601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.212002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.212056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.212081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.212666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.213063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.213082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.213097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.213110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.188 [2024-12-09 17:38:35.226482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.226998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.227045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.227069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.227668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.228261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.228274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.228284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.228294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.188 [2024-12-09 17:38:35.239511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.239915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.239933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.239941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.240111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.240288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.240298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.240305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.240312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.188 [2024-12-09 17:38:35.252389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.252806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.252850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.252874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.253319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.253484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.253494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.253500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.253506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.188 [2024-12-09 17:38:35.266987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.267488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.267509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.267519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.267755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.267993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.268005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.268014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.268023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.188 [2024-12-09 17:38:35.280021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.280450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.280468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.280476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.280650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.280849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.280859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.280866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.280872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.188 [2024-12-09 17:38:35.292945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.293359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.293395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.293421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.294006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.188 [2024-12-09 17:38:35.294269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.188 [2024-12-09 17:38:35.294279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.188 [2024-12-09 17:38:35.294289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.188 [2024-12-09 17:38:35.294296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.188 [2024-12-09 17:38:35.305718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.188 [2024-12-09 17:38:35.306134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.188 [2024-12-09 17:38:35.306151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.188 [2024-12-09 17:38:35.306158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.188 [2024-12-09 17:38:35.306344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.189 [2024-12-09 17:38:35.306514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.189 [2024-12-09 17:38:35.306525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.189 [2024-12-09 17:38:35.306531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.189 [2024-12-09 17:38:35.306538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.189 [2024-12-09 17:38:35.318508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.189 [2024-12-09 17:38:35.318928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.189 [2024-12-09 17:38:35.318972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.189 [2024-12-09 17:38:35.318997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.189 [2024-12-09 17:38:35.319472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.189 [2024-12-09 17:38:35.319635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.189 [2024-12-09 17:38:35.319645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.189 [2024-12-09 17:38:35.319651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.189 [2024-12-09 17:38:35.319657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.189 [2024-12-09 17:38:35.331264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.189 [2024-12-09 17:38:35.331675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.189 [2024-12-09 17:38:35.331713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.189 [2024-12-09 17:38:35.331739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.189 [2024-12-09 17:38:35.332316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.189 [2024-12-09 17:38:35.332489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.189 [2024-12-09 17:38:35.332500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.189 [2024-12-09 17:38:35.332508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.189 [2024-12-09 17:38:35.332515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.189 [2024-12-09 17:38:35.344011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.189 [2024-12-09 17:38:35.344354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.189 [2024-12-09 17:38:35.344371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.189 [2024-12-09 17:38:35.344378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.189 [2024-12-09 17:38:35.344538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.189 [2024-12-09 17:38:35.344700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.189 [2024-12-09 17:38:35.344709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.189 [2024-12-09 17:38:35.344715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.189 [2024-12-09 17:38:35.344721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.189 [2024-12-09 17:38:35.356813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.189 [2024-12-09 17:38:35.357208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.189 [2024-12-09 17:38:35.357229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.189 [2024-12-09 17:38:35.357237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.189 [2024-12-09 17:38:35.357396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.189 [2024-12-09 17:38:35.357557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.189 [2024-12-09 17:38:35.357566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.189 [2024-12-09 17:38:35.357573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.189 [2024-12-09 17:38:35.357579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.448 [2024-12-09 17:38:35.369796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.370227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.370244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.370252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.370433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.370608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.370617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.370623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.370630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.448 [2024-12-09 17:38:35.382581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.382984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.383002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.383012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.383173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.383359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.383370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.383376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.383383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.448 [2024-12-09 17:38:35.395403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.395811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.395850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.395876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.396439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.396601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.396611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.396617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.396623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.448 [2024-12-09 17:38:35.408261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.408676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.408721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.408745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.409344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.409826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.409836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.409842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.409848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.448 7251.50 IOPS, 28.33 MiB/s [2024-12-09T16:38:35.627Z] [2024-12-09 17:38:35.421204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.421539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.421556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.421564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.421724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.421889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.421898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.421904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.421911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
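Interleaved with the retry loop above is a periodic throughput sample (7251.50 IOPS, 28.33 MiB/s, stamped in UTC). The two figures are mutually consistent if each I/O is 4 KiB, which is an assumption here since the block size is not stated in this part of the log: 7251.50 x 4096 B = 29,702,144 B/s, or about 28.33 MiB/s. A one-line check:

#include <stdio.h>

int
main(void)
{
	double iops = 7251.50;   /* from the sample in the log */
	double io_size = 4096.0; /* assumed 4 KiB per I/O (not stated in the log) */

	/* 7251.50 IOPS * 4096 B = 29,702,144 B/s ~= 28.33 MiB/s */
	printf("%.2f MiB/s\n", iops * io_size / (1024.0 * 1024.0));
	return 0;
}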
00:28:06.448 [2024-12-09 17:38:35.434045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.434477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.434525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.434550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.435135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.435575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.435596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.435611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.435626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.448 [2024-12-09 17:38:35.448981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.449493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.449539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.449564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.450042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.450306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.450320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.450330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.450340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.448 [2024-12-09 17:38:35.461951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.462344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.462390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.462414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.462638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.462809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.462819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.462829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.462836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.448 [2024-12-09 17:38:35.474809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.448 [2024-12-09 17:38:35.475226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.448 [2024-12-09 17:38:35.475259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.448 [2024-12-09 17:38:35.475267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.448 [2024-12-09 17:38:35.475436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.448 [2024-12-09 17:38:35.475607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.448 [2024-12-09 17:38:35.475617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.448 [2024-12-09 17:38:35.475624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.448 [2024-12-09 17:38:35.475631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.449 [2024-12-09 17:38:35.487682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.488087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.488105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.488113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.488288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.488458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.488467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.488485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.488492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.449 [2024-12-09 17:38:35.500546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.500969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.501014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.501038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.501639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.502125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.502134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.502141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.502148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.449 [2024-12-09 17:38:35.513481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.513815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.513832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.513839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.514000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.514160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.514169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.514175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.514181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.449 [2024-12-09 17:38:35.526240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.526673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.526690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.526697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.526867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.527036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.527046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.527053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.527061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.449 [2024-12-09 17:38:35.539341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.539698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.539716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.539723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.539898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.540081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.540091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.540097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.540104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.449 [2024-12-09 17:38:35.552347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.552701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.552718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.552729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.552898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.553067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.553077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.553083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.553090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.449 [2024-12-09 17:38:35.565193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.565598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.565644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.565668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.566114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.566281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.566291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.566297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.566305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.449 [2024-12-09 17:38:35.578044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.578458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.578476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.578484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.578644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.578804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.578814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.578820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.578826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.449 [2024-12-09 17:38:35.590942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.591372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.591390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.591398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.591572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.591752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.591762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.591769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.591775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.449 [2024-12-09 17:38:35.603698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.449 [2024-12-09 17:38:35.604123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.449 [2024-12-09 17:38:35.604168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.449 [2024-12-09 17:38:35.604192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.449 [2024-12-09 17:38:35.604790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.449 [2024-12-09 17:38:35.605180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.449 [2024-12-09 17:38:35.605190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.449 [2024-12-09 17:38:35.605196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.449 [2024-12-09 17:38:35.605203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.449 [2024-12-09 17:38:35.616535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.450 [2024-12-09 17:38:35.616908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.450 [2024-12-09 17:38:35.616925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.450 [2024-12-09 17:38:35.616932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.450 [2024-12-09 17:38:35.617093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.450 [2024-12-09 17:38:35.617284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.450 [2024-12-09 17:38:35.617294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.450 [2024-12-09 17:38:35.617301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.450 [2024-12-09 17:38:35.617308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.709 [2024-12-09 17:38:35.629366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.709 [2024-12-09 17:38:35.629713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-12-09 17:38:35.629731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.709 [2024-12-09 17:38:35.629740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.709 [2024-12-09 17:38:35.629915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.709 [2024-12-09 17:38:35.630090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.709 [2024-12-09 17:38:35.630100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.709 [2024-12-09 17:38:35.630113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.709 [2024-12-09 17:38:35.630120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.709 [2024-12-09 17:38:35.642364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.709 [2024-12-09 17:38:35.642752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-12-09 17:38:35.642769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.709 [2024-12-09 17:38:35.642776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.709 [2024-12-09 17:38:35.642937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.709 [2024-12-09 17:38:35.643097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.709 [2024-12-09 17:38:35.643107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.709 [2024-12-09 17:38:35.643113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.709 [2024-12-09 17:38:35.643119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.709 [2024-12-09 17:38:35.655101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.709 [2024-12-09 17:38:35.655510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-12-09 17:38:35.655527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:06.709 [2024-12-09 17:38:35.655534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:06.709 [2024-12-09 17:38:35.655694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:06.709 [2024-12-09 17:38:35.655854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.709 [2024-12-09 17:38:35.655864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.709 [2024-12-09 17:38:35.655870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.709 [2024-12-09 17:38:35.655876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.709 [2024-12-09 17:38:35.667869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.709 [2024-12-09 17:38:35.668208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-12-09 17:38:35.668229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.709 [2024-12-09 17:38:35.668237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.709 [2024-12-09 17:38:35.668396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.709 [2024-12-09 17:38:35.668557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.709 [2024-12-09 17:38:35.668566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.709 [2024-12-09 17:38:35.668573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.709 [2024-12-09 17:38:35.668579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.709 [2024-12-09 17:38:35.680619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.709 [2024-12-09 17:38:35.681023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-12-09 17:38:35.681041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.709 [2024-12-09 17:38:35.681048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.709 [2024-12-09 17:38:35.681208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.709 [2024-12-09 17:38:35.681398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.709 [2024-12-09 17:38:35.681409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.709 [2024-12-09 17:38:35.681415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.709 [2024-12-09 17:38:35.681421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.709 [2024-12-09 17:38:35.693446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.709 [2024-12-09 17:38:35.693869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-12-09 17:38:35.693914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.709 [2024-12-09 17:38:35.693938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.709 [2024-12-09 17:38:35.694377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.709 [2024-12-09 17:38:35.694549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.709 [2024-12-09 17:38:35.694559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.709 [2024-12-09 17:38:35.694565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.709 [2024-12-09 17:38:35.694571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.709 [2024-12-09 17:38:35.706271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.709 [2024-12-09 17:38:35.706595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-12-09 17:38:35.706611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.709 [2024-12-09 17:38:35.706619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.709 [2024-12-09 17:38:35.706779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.709 [2024-12-09 17:38:35.706940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.709 [2024-12-09 17:38:35.706950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.709 [2024-12-09 17:38:35.706957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.709 [2024-12-09 17:38:35.706964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.709 [2024-12-09 17:38:35.719093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.719506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.719546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.719580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.720095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.720279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.720289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.720296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.720302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.731967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.732305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.732323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.732331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.732491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.732652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.732662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.732668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.732674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.744786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.745200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.745222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.745229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.745390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.745551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.745560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.745567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.745573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.757565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.757954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.757993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.758019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.758545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.758889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.758907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.758922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.758935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.772494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.773018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.773063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.773087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.773575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.773833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.773846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.773856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.773866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.785418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.785750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.785768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.785775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.785945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.786113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.786123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.786130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.786136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.798430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.798861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.798880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.798888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.799063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.799242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.799253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.799264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.799272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.811223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.811661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.811708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.811732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.812331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.812575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.812584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.812591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.812597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.823995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.824369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.824417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.824442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.825029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.825632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.825660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.825689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.825696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.836742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.837152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.837169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.837176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.710 [2024-12-09 17:38:35.837361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.710 [2024-12-09 17:38:35.837531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.710 [2024-12-09 17:38:35.837541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.710 [2024-12-09 17:38:35.837548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.710 [2024-12-09 17:38:35.837555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.710 [2024-12-09 17:38:35.849574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.710 [2024-12-09 17:38:35.849989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-12-09 17:38:35.850006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.710 [2024-12-09 17:38:35.850013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.711 [2024-12-09 17:38:35.850173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.711 [2024-12-09 17:38:35.850360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.711 [2024-12-09 17:38:35.850370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.711 [2024-12-09 17:38:35.850378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.711 [2024-12-09 17:38:35.850384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.711 [2024-12-09 17:38:35.862422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.711 [2024-12-09 17:38:35.862891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-12-09 17:38:35.862912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.711 [2024-12-09 17:38:35.862920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.711 [2024-12-09 17:38:35.863091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.711 [2024-12-09 17:38:35.863288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.711 [2024-12-09 17:38:35.863300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.711 [2024-12-09 17:38:35.863306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.711 [2024-12-09 17:38:35.863314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.711 [2024-12-09 17:38:35.875574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.711 [2024-12-09 17:38:35.876013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-12-09 17:38:35.876032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.711 [2024-12-09 17:38:35.876041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.711 [2024-12-09 17:38:35.876224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.711 [2024-12-09 17:38:35.876401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.711 [2024-12-09 17:38:35.876411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.711 [2024-12-09 17:38:35.876418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.711 [2024-12-09 17:38:35.876426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.888732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.889172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.889191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.889202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.889385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.889563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.889573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.889580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.971 [2024-12-09 17:38:35.889587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.901866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.902320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.902367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.902392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.902866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.903042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.903052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.903059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.971 [2024-12-09 17:38:35.903065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.914855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.915211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.915235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.915243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.915412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.915584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.915593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.915600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.971 [2024-12-09 17:38:35.915607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.927651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.928073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.928120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.928144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.928679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.928855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.928865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.928871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.971 [2024-12-09 17:38:35.928878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.940422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.940838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.940855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.940863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.941024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.941186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.941196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.941201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.971 [2024-12-09 17:38:35.941208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.953156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.953518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.953564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.953588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.954087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.954272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.954282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.954288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.971 [2024-12-09 17:38:35.954295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.971 [2024-12-09 17:38:35.965912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.971 [2024-12-09 17:38:35.966328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.971 [2024-12-09 17:38:35.966378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.971 [2024-12-09 17:38:35.966403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.971 [2024-12-09 17:38:35.966970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.971 [2024-12-09 17:38:35.967132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.971 [2024-12-09 17:38:35.967142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.971 [2024-12-09 17:38:35.967148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:35.967158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:35.978693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:35.979115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:35.979161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:35.979185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:35.979787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:35.980159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:35.980169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:35.980176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:35.980184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:35.991434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:35.991828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:35.991845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:35.991853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:35.992013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:35.992174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:35.992183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:35.992189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:35.992196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.004305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.004733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.004777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.004799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.005224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.005409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.005419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.005426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.005433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.017108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.017432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.017449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.017456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.017616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.017777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.017786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.017792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.017799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.029932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.030352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.030399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.030422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.031008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.031403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.031414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.031421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.031428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.042895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.043243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.043261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.043270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.043439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.043609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.043619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.043625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.043632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.055900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.056342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.056389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.056414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.057006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.057367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.057377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.057384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.057391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.068925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.069304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.069322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.069330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.069500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.069670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.069680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.069687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.069694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.081838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.082230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.082288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.082313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.082899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.083400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.083411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.083419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.083426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.094702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.972 [2024-12-09 17:38:36.095079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.972 [2024-12-09 17:38:36.095097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.972 [2024-12-09 17:38:36.095105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.972 [2024-12-09 17:38:36.095280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.972 [2024-12-09 17:38:36.095450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.972 [2024-12-09 17:38:36.095463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.972 [2024-12-09 17:38:36.095470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.972 [2024-12-09 17:38:36.095486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.972 [2024-12-09 17:38:36.107546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.973 [2024-12-09 17:38:36.107885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.973 [2024-12-09 17:38:36.107902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.973 [2024-12-09 17:38:36.107909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.973 [2024-12-09 17:38:36.108069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.973 [2024-12-09 17:38:36.108236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.973 [2024-12-09 17:38:36.108246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.973 [2024-12-09 17:38:36.108253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.973 [2024-12-09 17:38:36.108260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.973 [2024-12-09 17:38:36.120408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.973 [2024-12-09 17:38:36.120747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.973 [2024-12-09 17:38:36.120765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.973 [2024-12-09 17:38:36.120772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.973 [2024-12-09 17:38:36.120942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.973 [2024-12-09 17:38:36.121113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.973 [2024-12-09 17:38:36.121123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.973 [2024-12-09 17:38:36.121130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.973 [2024-12-09 17:38:36.121136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.973 [2024-12-09 17:38:36.133483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.973 [2024-12-09 17:38:36.133759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.973 [2024-12-09 17:38:36.133777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.973 [2024-12-09 17:38:36.133784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.973 [2024-12-09 17:38:36.133944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.973 [2024-12-09 17:38:36.134105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.973 [2024-12-09 17:38:36.134115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.973 [2024-12-09 17:38:36.134121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.973 [2024-12-09 17:38:36.134130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.973 [2024-12-09 17:38:36.146611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.973 [2024-12-09 17:38:36.146880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.973 [2024-12-09 17:38:36.146899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:06.973 [2024-12-09 17:38:36.146907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:06.973 [2024-12-09 17:38:36.147081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:06.973 [2024-12-09 17:38:36.147261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.973 [2024-12-09 17:38:36.147271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.973 [2024-12-09 17:38:36.147278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.973 [2024-12-09 17:38:36.147286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.233 [2024-12-09 17:38:36.159567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.233 [2024-12-09 17:38:36.159993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.233 [2024-12-09 17:38:36.160039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.233 [2024-12-09 17:38:36.160063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.233 [2024-12-09 17:38:36.160561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.233 [2024-12-09 17:38:36.160724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.233 [2024-12-09 17:38:36.160734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.233 [2024-12-09 17:38:36.160740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.233 [2024-12-09 17:38:36.160747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.233 [2024-12-09 17:38:36.172384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.233 [2024-12-09 17:38:36.172710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.233 [2024-12-09 17:38:36.172727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.233 [2024-12-09 17:38:36.172735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.233 [2024-12-09 17:38:36.172896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.233 [2024-12-09 17:38:36.173057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.233 [2024-12-09 17:38:36.173067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.233 [2024-12-09 17:38:36.173073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.233 [2024-12-09 17:38:36.173080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.233 [2024-12-09 17:38:36.185348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.233 [2024-12-09 17:38:36.185687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.233 [2024-12-09 17:38:36.185704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.233 [2024-12-09 17:38:36.185711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.233 [2024-12-09 17:38:36.185870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.233 [2024-12-09 17:38:36.186032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.233 [2024-12-09 17:38:36.186041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.233 [2024-12-09 17:38:36.186047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.233 [2024-12-09 17:38:36.186053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.233 [2024-12-09 17:38:36.198230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.233 [2024-12-09 17:38:36.198594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.233 [2024-12-09 17:38:36.198611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.233 [2024-12-09 17:38:36.198618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.233 [2024-12-09 17:38:36.198778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.233 [2024-12-09 17:38:36.198939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.233 [2024-12-09 17:38:36.198949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.233 [2024-12-09 17:38:36.198955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.233 [2024-12-09 17:38:36.198961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.233 [2024-12-09 17:38:36.211238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.233 [2024-12-09 17:38:36.211525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.233 [2024-12-09 17:38:36.211543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.233 [2024-12-09 17:38:36.211550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.233 [2024-12-09 17:38:36.211720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.233 [2024-12-09 17:38:36.211889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.233 [2024-12-09 17:38:36.211899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.233 [2024-12-09 17:38:36.211905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.233 [2024-12-09 17:38:36.211911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.233 [2024-12-09 17:38:36.224212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.233 [2024-12-09 17:38:36.224568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.233 [2024-12-09 17:38:36.224585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.233 [2024-12-09 17:38:36.224592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.233 [2024-12-09 17:38:36.224756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.233 [2024-12-09 17:38:36.224918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.233 [2024-12-09 17:38:36.224928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.233 [2024-12-09 17:38:36.224935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.233 [2024-12-09 17:38:36.224941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.233 [2024-12-09 17:38:36.237107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.233 [2024-12-09 17:38:36.237429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.233 [2024-12-09 17:38:36.237447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.233 [2024-12-09 17:38:36.237454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.233 [2024-12-09 17:38:36.237614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.233 [2024-12-09 17:38:36.237775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.237785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.237791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.237798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.234 [2024-12-09 17:38:36.250013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.250377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.250424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.250449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.250973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.251144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.251154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.251162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.251170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.234 [2024-12-09 17:38:36.262796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.263196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.263255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.263280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.263774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.263945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.263960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.263969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.263976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.234 [2024-12-09 17:38:36.275608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.276011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.276029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.276037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.276207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.276386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.276397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.276404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.276411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.234 [2024-12-09 17:38:36.288585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.288939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.288983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.289008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.289526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.289688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.289698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.289704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.289712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.234 [2024-12-09 17:38:36.301465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.301855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.301873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.301881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.302050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.302226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.302236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.302260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.302271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.234 [2024-12-09 17:38:36.314565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.315012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.315058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.315083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.315681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.316022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.316032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.316038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.316044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.234 [2024-12-09 17:38:36.327578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.327971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.328017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.328042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.328640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.329089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.329099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.329105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.329112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.234 [2024-12-09 17:38:36.340458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.340850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.340867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.340875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.341045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.341215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.341232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.341240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.341247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.234 [2024-12-09 17:38:36.353334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-09 17:38:36.353627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-09 17:38:36.353648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-09 17:38:36.353656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.234 [2024-12-09 17:38:36.353825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.234 [2024-12-09 17:38:36.353996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.234 [2024-12-09 17:38:36.354006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.234 [2024-12-09 17:38:36.354012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.234 [2024-12-09 17:38:36.354019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.235 [2024-12-09 17:38:36.366326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-09 17:38:36.366608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-09 17:38:36.366626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-09 17:38:36.366634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.235 [2024-12-09 17:38:36.366811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.235 [2024-12-09 17:38:36.366973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-09 17:38:36.366983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-09 17:38:36.366989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-09 17:38:36.366995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.235 [2024-12-09 17:38:36.379191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-09 17:38:36.379575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-09 17:38:36.379620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-09 17:38:36.379644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.235 [2024-12-09 17:38:36.380084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.235 [2024-12-09 17:38:36.380249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-09 17:38:36.380259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-09 17:38:36.380265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-09 17:38:36.380271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.235 [2024-12-09 17:38:36.392033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-09 17:38:36.392370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-09 17:38:36.392388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-09 17:38:36.392396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.235 [2024-12-09 17:38:36.392561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.235 [2024-12-09 17:38:36.392721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-09 17:38:36.392731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-09 17:38:36.392737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-09 17:38:36.392743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.235 [2024-12-09 17:38:36.404991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-09 17:38:36.405443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-09 17:38:36.405461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-09 17:38:36.405469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.235 [2024-12-09 17:38:36.405643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.235 [2024-12-09 17:38:36.405819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-09 17:38:36.405829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-09 17:38:36.405836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-09 17:38:36.405843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 5801.20 IOPS, 22.66 MiB/s [2024-12-09T16:38:36.674Z] [2024-12-09 17:38:36.419308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.419679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.419724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-09 17:38:36.419749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.495 [2024-12-09 17:38:36.420207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.495 [2024-12-09 17:38:36.420404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-09 17:38:36.420414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-09 17:38:36.420421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-09 17:38:36.420428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.495 [2024-12-09 17:38:36.432108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.432404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.432422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-09 17:38:36.432429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.495 [2024-12-09 17:38:36.432600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.495 [2024-12-09 17:38:36.432771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-09 17:38:36.432784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-09 17:38:36.432790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-09 17:38:36.432797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-09 17:38:36.445250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.445586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.445605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-09 17:38:36.445613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.495 [2024-12-09 17:38:36.445788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.495 [2024-12-09 17:38:36.445963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-09 17:38:36.445974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-09 17:38:36.445981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-09 17:38:36.445989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.495 [2024-12-09 17:38:36.458269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.458605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.458624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-09 17:38:36.458632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.495 [2024-12-09 17:38:36.458806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.495 [2024-12-09 17:38:36.458983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-09 17:38:36.458993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-09 17:38:36.459000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-09 17:38:36.459007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-09 17:38:36.471376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.471726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.471744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-09 17:38:36.471753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.495 [2024-12-09 17:38:36.471921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.495 [2024-12-09 17:38:36.472092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-09 17:38:36.472102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-09 17:38:36.472108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-09 17:38:36.472119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.495 [2024-12-09 17:38:36.484348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.484702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.484746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-09 17:38:36.484770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.495 [2024-12-09 17:38:36.485234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.495 [2024-12-09 17:38:36.485423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-09 17:38:36.485433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-09 17:38:36.485440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-09 17:38:36.485447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-09 17:38:36.497109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-09 17:38:36.497500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-09 17:38:36.497518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.497526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.497696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.497867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.497877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.497884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.497890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.496 [2024-12-09 17:38:36.509884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.510302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.510357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.510383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.510969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.511189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.511198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.511205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.511211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-09 17:38:36.522765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.523177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.523198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.523205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.523394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.523564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.523574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.523581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.523588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.496 [2024-12-09 17:38:36.535617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.536015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.536031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.536039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.536199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.536388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.536398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.536405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.536411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-09 17:38:36.548504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.548907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.548925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.548932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.549092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.549275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.549286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.549293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.549301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.496 [2024-12-09 17:38:36.561307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.561730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.561748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.561755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.561928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.562098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.562109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.562115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.562122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-09 17:38:36.574413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.574769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.574787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.574795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.574969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.575143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.575153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.575160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.575167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.496 [2024-12-09 17:38:36.587306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.587706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.587724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.587731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.587900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.588069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.588079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.588085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.588092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-09 17:38:36.600183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.600579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.600596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.600603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.600763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.600924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.600937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.600943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.600950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.496 [2024-12-09 17:38:36.612970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.613395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.613442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.613467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.614054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.496 [2024-12-09 17:38:36.614586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-09 17:38:36.614595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-09 17:38:36.614601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-09 17:38:36.614608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-09 17:38:36.625797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-09 17:38:36.626226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-09 17:38:36.626274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-09 17:38:36.626299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.496 [2024-12-09 17:38:36.626732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.497 [2024-12-09 17:38:36.626893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.497 [2024-12-09 17:38:36.626901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.497 [2024-12-09 17:38:36.626908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.497 [2024-12-09 17:38:36.626914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.497 [2024-12-09 17:38:36.638663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.497 [2024-12-09 17:38:36.639085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.497 [2024-12-09 17:38:36.639131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.497 [2024-12-09 17:38:36.639154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.497 [2024-12-09 17:38:36.639609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.497 [2024-12-09 17:38:36.639782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.497 [2024-12-09 17:38:36.639792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.497 [2024-12-09 17:38:36.639799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.497 [2024-12-09 17:38:36.639805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.497 [2024-12-09 17:38:36.651510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.497 [2024-12-09 17:38:36.651943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.497 [2024-12-09 17:38:36.651960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.497 [2024-12-09 17:38:36.651967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.497 [2024-12-09 17:38:36.652127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.497 [2024-12-09 17:38:36.652312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.497 [2024-12-09 17:38:36.652322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.497 [2024-12-09 17:38:36.652330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.497 [2024-12-09 17:38:36.652337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.497 [2024-12-09 17:38:36.664320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.497 [2024-12-09 17:38:36.664737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.497 [2024-12-09 17:38:36.664792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.497 [2024-12-09 17:38:36.664817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.497 [2024-12-09 17:38:36.665419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.497 [2024-12-09 17:38:36.665918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.497 [2024-12-09 17:38:36.665928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.497 [2024-12-09 17:38:36.665951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.497 [2024-12-09 17:38:36.665957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-09 17:38:36.677379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-09 17:38:36.677756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-09 17:38:36.677774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-09 17:38:36.677781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.757 [2024-12-09 17:38:36.677957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.757 [2024-12-09 17:38:36.678132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-09 17:38:36.678142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-09 17:38:36.678148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-09 17:38:36.678155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.757 [2024-12-09 17:38:36.690278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-09 17:38:36.690696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-09 17:38:36.690749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-09 17:38:36.690773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.757 [2024-12-09 17:38:36.691373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.757 [2024-12-09 17:38:36.691829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-09 17:38:36.691838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-09 17:38:36.691844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-09 17:38:36.691850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-09 17:38:36.703241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-09 17:38:36.703654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-09 17:38:36.703671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-09 17:38:36.703679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.757 [2024-12-09 17:38:36.703838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.757 [2024-12-09 17:38:36.703999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-09 17:38:36.704009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-09 17:38:36.704016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-09 17:38:36.704022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.757 [2024-12-09 17:38:36.716049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-09 17:38:36.716466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-09 17:38:36.716483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-09 17:38:36.716491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.757 [2024-12-09 17:38:36.716651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.757 [2024-12-09 17:38:36.716812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-09 17:38:36.716821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-09 17:38:36.716827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-09 17:38:36.716833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-09 17:38:36.728882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-09 17:38:36.729297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-09 17:38:36.729314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-09 17:38:36.729322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.757 [2024-12-09 17:38:36.729482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.757 [2024-12-09 17:38:36.729645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-09 17:38:36.729655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-09 17:38:36.729662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-09 17:38:36.729668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.757 [2024-12-09 17:38:36.741769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-09 17:38:36.742184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-09 17:38:36.742202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-09 17:38:36.742209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:07.757 [2024-12-09 17:38:36.742396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:07.757 [2024-12-09 17:38:36.742566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-09 17:38:36.742576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-09 17:38:36.742582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-09 17:38:36.742589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-09 17:38:36.754654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.757 [2024-12-09 17:38:36.755076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-12-09 17:38:36.755122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.757 [2024-12-09 17:38:36.755146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.757 [2024-12-09 17:38:36.755627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.757 [2024-12-09 17:38:36.755799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.757 [2024-12-09 17:38:36.755809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.757 [2024-12-09 17:38:36.755815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.757 [2024-12-09 17:38:36.755822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.767518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.767938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.767983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.768007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.768605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.769169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.769179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.769188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.769194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.780251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.780604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.780621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.780628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.780787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.780948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.780958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.780965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.780971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.793095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.793510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.793528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.793536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.793696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.793857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.793867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.793873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.793880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.805943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.806270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.806288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.806296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.806456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.806617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.806628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.806634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.806640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.818754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.819170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.819187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.819195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.819389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.819566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.819577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.819584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.819591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.831858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.832283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.832301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.832309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.832497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.832677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.832687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.832693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.832700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.844775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.845134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.845152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.845161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.845335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.845506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.845516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.845522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.845529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.857584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.857992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.858009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.858019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.858179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.858371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.858382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.858389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.858395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.870413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.870803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.870820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.870828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.870989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.871151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.871161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.871167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.871173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.883214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.883623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.883676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.758 [2024-12-09 17:38:36.883701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.758 [2024-12-09 17:38:36.884212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.758 [2024-12-09 17:38:36.884404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.758 [2024-12-09 17:38:36.884413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.758 [2024-12-09 17:38:36.884419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.758 [2024-12-09 17:38:36.884425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.758 [2024-12-09 17:38:36.896167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.758 [2024-12-09 17:38:36.896520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.758 [2024-12-09 17:38:36.896539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.759 [2024-12-09 17:38:36.896547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.759 [2024-12-09 17:38:36.896716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.759 [2024-12-09 17:38:36.896892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.759 [2024-12-09 17:38:36.896903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.759 [2024-12-09 17:38:36.896909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.759 [2024-12-09 17:38:36.896916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.759 [2024-12-09 17:38:36.909149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.759 [2024-12-09 17:38:36.909565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.759 [2024-12-09 17:38:36.909583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.759 [2024-12-09 17:38:36.909591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.759 [2024-12-09 17:38:36.909760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.759 [2024-12-09 17:38:36.909929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.759 [2024-12-09 17:38:36.909939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.759 [2024-12-09 17:38:36.909946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.759 [2024-12-09 17:38:36.909953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.759 [2024-12-09 17:38:36.922099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.759 [2024-12-09 17:38:36.922463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.759 [2024-12-09 17:38:36.922480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:07.759 [2024-12-09 17:38:36.922488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:07.759 [2024-12-09 17:38:36.922658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:07.759 [2024-12-09 17:38:36.922827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.759 [2024-12-09 17:38:36.922836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.759 [2024-12-09 17:38:36.922842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.759 [2024-12-09 17:38:36.922849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-09 17:38:36.935145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-09 17:38:36.935572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-09 17:38:36.935588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-09 17:38:36.935595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.018 [2024-12-09 17:38:36.935765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.018 [2024-12-09 17:38:36.935934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-09 17:38:36.935942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-09 17:38:36.935952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-09 17:38:36.935959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-09 17:38:36.948118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-09 17:38:36.948547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-09 17:38:36.948593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-09 17:38:36.948616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.018 [2024-12-09 17:38:36.949203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.018 [2024-12-09 17:38:36.949580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-09 17:38:36.949589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-09 17:38:36.949595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-09 17:38:36.949601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-09 17:38:36.961014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-09 17:38:36.961419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-09 17:38:36.961463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-09 17:38:36.961487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.018 [2024-12-09 17:38:36.962070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.018 [2024-12-09 17:38:36.962619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-09 17:38:36.962627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-09 17:38:36.962634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-09 17:38:36.962641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-09 17:38:36.973915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2731975 Killed "${NVMF_APP[@]}" "$@"
00:28:08.018 [2024-12-09 17:38:36.974314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-09 17:38:36.974331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-09 17:38:36.974339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.018 [2024-12-09 17:38:36.974513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.018 [2024-12-09 17:38:36.974687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-09 17:38:36.974696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-09 17:38:36.974703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-09 17:38:36.974713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2733371
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2733371
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2733371 ']'
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:08.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:08.018 17:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:08.019 [2024-12-09 17:38:36.986964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:36.987391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:36.987410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:36.987418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:36.987592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:36.987768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:36.987776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:36.987782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:36.987789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
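In the shell trace above, tgt_init restarts the target after the previous nvmf_tgt was killed: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records its pid (2733371), and waitforlisten polls until the new process accepts connections on the RPC socket /var/tmp/spdk.sock (up to max_retries=100). The helper itself is shell; the following is only a hypothetical C rendering of that readiness probe, with the function name and retry delay being my own choices:

    /* wait_for_rpc.c: illustrative stand-in for the shell waitforlisten
     * helper: retry a UNIX-domain connect() until the RPC server is up. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_rpc_listener(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            struct sockaddr_un addr = { .sun_family = AF_UNIX };

            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
            close(fd);
            if (rc == 0)
                return 0;        /* target is listening */
            usleep(100 * 1000);  /* assumed pause between probes */
        }
        return -1;               /* gave up, akin to a test timeout */
    }

    int main(void)
    {
        /* /var/tmp/spdk.sock and max_retries=100 come from the trace. */
        return wait_for_rpc_listener("/var/tmp/spdk.sock", 100) ? 1 : 0;
    }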
00:28:08.019 [2024-12-09 17:38:37.000045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.000395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.000413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.000421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.000595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.000770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.000779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.000787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.000797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.013008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.013433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.013451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.013458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.013632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.013819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.013828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.013835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.013841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.025966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.026390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.026406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.026414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.026583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.026754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.026762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.026768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.026775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.034890] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:28:08.019 [2024-12-09 17:38:37.034926] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:08.019 [2024-12-09 17:38:37.039052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.039478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.039494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.039502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.039670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.039840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.039848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.039854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.039864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.052059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.052469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.052487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.052494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.052663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.052833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.052841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.052848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.052854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.065132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.065572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.065589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.065596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.065771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.065945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.065953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.065960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.065966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.078242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.078664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.078680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.078688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.078863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.079037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.079045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.079052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.079059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.091340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.091774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.091791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.091798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.091972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.092147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.092156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.092162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.019 [2024-12-09 17:38:37.092169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.019 [2024-12-09 17:38:37.104333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.019 [2024-12-09 17:38:37.104668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.019 [2024-12-09 17:38:37.104685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.019 [2024-12-09 17:38:37.104692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.019 [2024-12-09 17:38:37.104862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.019 [2024-12-09 17:38:37.105031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.019 [2024-12-09 17:38:37.105040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.019 [2024-12-09 17:38:37.105046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.105052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.020 [2024-12-09 17:38:37.114725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:08.020 [2024-12-09 17:38:37.117255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.020 [2024-12-09 17:38:37.117690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.020 [2024-12-09 17:38:37.117708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.020 [2024-12-09 17:38:37.117716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.020 [2024-12-09 17:38:37.117886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.020 [2024-12-09 17:38:37.118058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.020 [2024-12-09 17:38:37.118067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.020 [2024-12-09 17:38:37.118073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.118079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.020 [2024-12-09 17:38:37.130237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.020 [2024-12-09 17:38:37.130670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.020 [2024-12-09 17:38:37.130687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.020 [2024-12-09 17:38:37.130700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.020 [2024-12-09 17:38:37.130870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.020 [2024-12-09 17:38:37.131041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.020 [2024-12-09 17:38:37.131049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.020 [2024-12-09 17:38:37.131056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.131062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.020 [2024-12-09 17:38:37.143171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.020 [2024-12-09 17:38:37.143626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.020 [2024-12-09 17:38:37.143643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.020 [2024-12-09 17:38:37.143651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.020 [2024-12-09 17:38:37.143824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.020 [2024-12-09 17:38:37.144000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.020 [2024-12-09 17:38:37.144009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.020 [2024-12-09 17:38:37.144016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.144023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.020 [2024-12-09 17:38:37.153973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:08.020 [2024-12-09 17:38:37.153996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:08.020 [2024-12-09 17:38:37.154004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:08.020 [2024-12-09 17:38:37.154010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:08.020 [2024-12-09 17:38:37.154016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:08.020 [2024-12-09 17:38:37.155349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:08.020 [2024-12-09 17:38:37.155385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:08.020 [2024-12-09 17:38:37.155387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:08.020 [2024-12-09 17:38:37.156258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.020 [2024-12-09 17:38:37.156631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.020 [2024-12-09 17:38:37.156648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.020 [2024-12-09 17:38:37.156656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.020 [2024-12-09 17:38:37.156832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.020 [2024-12-09 17:38:37.157007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.020 [2024-12-09 17:38:37.157016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.020 [2024-12-09 17:38:37.157023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.157034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.020 [2024-12-09 17:38:37.169317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.020 [2024-12-09 17:38:37.169773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.020 [2024-12-09 17:38:37.169793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.020 [2024-12-09 17:38:37.169801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.020 [2024-12-09 17:38:37.169976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.020 [2024-12-09 17:38:37.170151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.020 [2024-12-09 17:38:37.170160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.020 [2024-12-09 17:38:37.170167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.170174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
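The three reactor notices line up with the -m 0xE core mask passed to nvmf_tgt: bits 1, 2 and 3 are set, hence the earlier "Total cores available: 3" and reactors on cores 1, 2 and 3 (core 0 stays free). A small sketch of how such a hex mask decodes, illustrative only; SPDK's own parsing lives in its env layer:

    /* coremask.c: decode a -m style hex core mask. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE; /* value from `nvmf_tgt ... -m 0xE` */
        int total = 0;

        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf("reactor on core %d\n", core); /* prints 1, 2, 3 */
                total++;
            }
        }
        printf("total cores available: %d\n", total); /* prints 3 */
        return 0;
    }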
00:28:08.020 [2024-12-09 17:38:37.182453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.020 [2024-12-09 17:38:37.182903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.020 [2024-12-09 17:38:37.182924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.020 [2024-12-09 17:38:37.182932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.020 [2024-12-09 17:38:37.183107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.020 [2024-12-09 17:38:37.183286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.020 [2024-12-09 17:38:37.183296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.020 [2024-12-09 17:38:37.183303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.020 [2024-12-09 17:38:37.183310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-09 17:38:37.195588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-09 17:38:37.196008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-09 17:38:37.196027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-09 17:38:37.196036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.279 [2024-12-09 17:38:37.196213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.279 [2024-12-09 17:38:37.196396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-09 17:38:37.196406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-09 17:38:37.196413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-09 17:38:37.196420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-09 17:38:37.208714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-09 17:38:37.209156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-09 17:38:37.209174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-09 17:38:37.209182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.279 [2024-12-09 17:38:37.209363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.279 [2024-12-09 17:38:37.209538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-09 17:38:37.209546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-09 17:38:37.209553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-09 17:38:37.209560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-09 17:38:37.221849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-09 17:38:37.222261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-09 17:38:37.222279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-09 17:38:37.222287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.279 [2024-12-09 17:38:37.222463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.279 [2024-12-09 17:38:37.222637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-09 17:38:37.222646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-09 17:38:37.222654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-09 17:38:37.222660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-09 17:38:37.234924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-09 17:38:37.235354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-09 17:38:37.235371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-09 17:38:37.235379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.279 [2024-12-09 17:38:37.235554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.279 [2024-12-09 17:38:37.235728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-09 17:38:37.235737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-09 17:38:37.235744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-09 17:38:37.235750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-09 17:38:37.248026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-09 17:38:37.248465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-09 17:38:37.248482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-09 17:38:37.248490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set
00:28:08.279 [2024-12-09 17:38:37.248668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor
00:28:08.279 [2024-12-09 17:38:37.248842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-09 17:38:37.248850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-09 17:38:37.248857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-09 17:38:37.248864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-09 17:38:37.261124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.279 [2024-12-09 17:38:37.261562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.279 [2024-12-09 17:38:37.261578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.279 [2024-12-09 17:38:37.261586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.279 [2024-12-09 17:38:37.261759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.279 [2024-12-09 17:38:37.261935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.279 [2024-12-09 17:38:37.261943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.279 [2024-12-09 17:38:37.261950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.279 [2024-12-09 17:38:37.261956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.279 [2024-12-09 17:38:37.274212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.279 [2024-12-09 17:38:37.274621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.279 [2024-12-09 17:38:37.274637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.279 [2024-12-09 17:38:37.274644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.279 [2024-12-09 17:38:37.274817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.274991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.274999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.275006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.275012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
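The reconnect attempts land roughly 13 ms apart (e.g. 17:38:37.261124 and 17:38:37.274212 in the line above), which looks like a fixed poller cadence in the bdev_nvme reset path rather than any backoff; that interval is an inference from the timestamps, not from configuration shown in this log. The spacing can be checked directly from two adjacent records:

    from datetime import datetime

    # Two consecutive "resetting controller" timestamps from the line above.
    fmt = "%H:%M:%S.%f"
    t1 = datetime.strptime("17:38:37.261124", fmt)
    t2 = datetime.strptime("17:38:37.274212", fmt)
    print((t2 - t1).total_seconds() * 1000)  # ~13.1 ms between attempts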
00:28:08.280 [2024-12-09 17:38:37.287268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.287697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.287714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.287722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.287895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.288070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.288081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.288088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.288094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.280 [2024-12-09 17:38:37.300365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.300798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.300815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.300823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.300996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.301172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.301180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.301187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.301193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.280 [2024-12-09 17:38:37.313442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.313835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.313852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.313859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.314032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.314206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.314215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.314226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.314232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.280 [2024-12-09 17:38:37.326500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.326936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.326952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.326960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.327133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.327313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.327322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.327329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.327338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.280 [2024-12-09 17:38:37.339597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.340009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.340025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.340033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.340206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.340387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.340396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.340403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.340410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.280 [2024-12-09 17:38:37.352661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.352987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.353004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.353011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.353184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.353363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.353372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.353378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.353384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.280 [2024-12-09 17:38:37.365647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.366086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.366102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.366110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.366288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.366463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.366471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.366478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.366484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.280 [2024-12-09 17:38:37.378728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.379161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.379180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.379188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.379365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.379539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.379548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.379555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.379561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.280 [2024-12-09 17:38:37.391816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.392226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.392243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.392250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.392423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.392597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.392605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.280 [2024-12-09 17:38:37.392612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.280 [2024-12-09 17:38:37.392618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.280 [2024-12-09 17:38:37.404877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.280 [2024-12-09 17:38:37.405290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.280 [2024-12-09 17:38:37.405307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.280 [2024-12-09 17:38:37.405314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.280 [2024-12-09 17:38:37.405487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.280 [2024-12-09 17:38:37.405662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.280 [2024-12-09 17:38:37.405670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.281 [2024-12-09 17:38:37.405677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.281 [2024-12-09 17:38:37.405683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.281 [2024-12-09 17:38:37.417948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.281 [2024-12-09 17:38:37.418385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.281 [2024-12-09 17:38:37.418401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.281 [2024-12-09 17:38:37.418409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.281 [2024-12-09 17:38:37.418587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.281 [2024-12-09 17:38:37.418761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.281 [2024-12-09 17:38:37.418769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.281 [2024-12-09 17:38:37.418776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.281 [2024-12-09 17:38:37.418782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.281 4834.33 IOPS, 18.88 MiB/s [2024-12-09T16:38:37.460Z] [2024-12-09 17:38:37.431067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.281 [2024-12-09 17:38:37.431426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.281 [2024-12-09 17:38:37.431443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.281 [2024-12-09 17:38:37.431451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.281 [2024-12-09 17:38:37.431625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.281 [2024-12-09 17:38:37.431800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.281 [2024-12-09 17:38:37.431808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.281 [2024-12-09 17:38:37.431815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.281 [2024-12-09 17:38:37.431821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
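The throughput sample embedded in the line above (4834.33 IOPS, 18.88 MiB/s) is self-consistent if the workload uses 4 KiB I/Os, a common bdevperf-style default but an assumption here, since the I/O size is not shown in this part of the log:

    # 4 KiB per I/O is assumed; only the IOPS and MiB/s figures come from the log.
    iops = 4834.33
    io_size = 4096                 # bytes (assumed)
    print(iops * io_size / 2**20)  # ~18.88 MiB/s, matching the sample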
00:28:08.281 [2024-12-09 17:38:37.444070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.281 [2024-12-09 17:38:37.444505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.281 [2024-12-09 17:38:37.444522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.281 [2024-12-09 17:38:37.444529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.281 [2024-12-09 17:38:37.444703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.281 [2024-12-09 17:38:37.444878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.281 [2024-12-09 17:38:37.444886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.281 [2024-12-09 17:38:37.444893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.281 [2024-12-09 17:38:37.444900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.539 [2024-12-09 17:38:37.457160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.539 [2024-12-09 17:38:37.457594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.539 [2024-12-09 17:38:37.457610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.539 [2024-12-09 17:38:37.457617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.539 [2024-12-09 17:38:37.457791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.539 [2024-12-09 17:38:37.457965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.539 [2024-12-09 17:38:37.457976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.539 [2024-12-09 17:38:37.457983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.539 [2024-12-09 17:38:37.457989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.540 [2024-12-09 17:38:37.470266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.470696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.470714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.470721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.470896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.471070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.471079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.471086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.471093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.540 [2024-12-09 17:38:37.483360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.483763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.483780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.483787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.483961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.484135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.484144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.484150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.484156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.540 [2024-12-09 17:38:37.496432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.496865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.496883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.496893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.497066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.497245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.497255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.497261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.497273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.540 [2024-12-09 17:38:37.509542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.509890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.509907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.509915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.510089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.510269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.510278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.510285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.510292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.540 [2024-12-09 17:38:37.522596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.523030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.523047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.523054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.523233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.523409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.523417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.523424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.523430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.540 [2024-12-09 17:38:37.535703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.535987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.536005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.536014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.536188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.536368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.536377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.536385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.536391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.540 [2024-12-09 17:38:37.548826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.549111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.549131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.549139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.549319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.549495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.549504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.549511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.549519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.540 [2024-12-09 17:38:37.561958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.562296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.562314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.562321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.562496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.562670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.540 [2024-12-09 17:38:37.562680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.540 [2024-12-09 17:38:37.562687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.540 [2024-12-09 17:38:37.562693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.540 [2024-12-09 17:38:37.574962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.540 [2024-12-09 17:38:37.575238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.540 [2024-12-09 17:38:37.575255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.540 [2024-12-09 17:38:37.575263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.540 [2024-12-09 17:38:37.575437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.540 [2024-12-09 17:38:37.575611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.575621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.575628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.575634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.541 [2024-12-09 17:38:37.588079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.588353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.588370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.588377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.588555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.588730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.588738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.588745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.588751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.541 [2024-12-09 17:38:37.601203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.601549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.601565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.601573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.601746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.601921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.601930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.601936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.601943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.541 [2024-12-09 17:38:37.614234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.614518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.614535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.614542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.614716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.614890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.614898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.614905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.614911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.541 [2024-12-09 17:38:37.627370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.627707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.627724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.627731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.627906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.628080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.628093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.628100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.628106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.541 [2024-12-09 17:38:37.640384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.640789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.640807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.640814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.640988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.641162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.641171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.641177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.641183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
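Each record above follows the same SPDK log shape: a bracketed wall-clock timestamp, then source file, line number (sometimes space-padded, e.g. "nvme_tcp.c: 326:..."), function name, a level such as *NOTICE* or *ERROR*, and the message. A small parser for triaging long stretches like this one; the regex is a sketch written against the records above, not an SPDK-provided format definition:

    import re

    # One record copied verbatim from the log above.
    rec = ("[2024-12-09 17:38:37.209156] posix.c:1054:posix_sock_create: "
           "*ERROR*: connect() failed, errno = 111")
    pattern = re.compile(
        r"\[(?P<ts>[^\]]+)\]\s+"                                # timestamp
        r"(?P<file>[\w.]+):\s*(?P<line>\d+):(?P<func>\w+):\s+"  # source location
        r"\*(?P<level>\w+)\*:\s+(?P<msg>.*)"                    # level + message
    )
    print(pattern.match(rec).groupdict())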
00:28:08.541 [2024-12-09 17:38:37.653448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.653748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.653764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.653772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.653945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.654120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.654129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.654135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.654142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.541 [2024-12-09 17:38:37.666583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.666857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.666874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.666881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.667055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.667234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.667243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.667249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.667256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.541 [2024-12-09 17:38:37.679692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.680029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.680046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.680053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.680232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.680406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.680416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.680423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.680429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.541 [2024-12-09 17:38:37.692696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.693108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.693125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.693132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.693310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.693486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.693495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.693502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.693508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.541 [2024-12-09 17:38:37.705786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.541 [2024-12-09 17:38:37.706136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.541 [2024-12-09 17:38:37.706153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.541 [2024-12-09 17:38:37.706160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.541 [2024-12-09 17:38:37.706338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.541 [2024-12-09 17:38:37.706514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.541 [2024-12-09 17:38:37.706523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.541 [2024-12-09 17:38:37.706530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.541 [2024-12-09 17:38:37.706536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.801 [2024-12-09 17:38:37.718815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.719106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.719125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.719133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.719320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.719495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.719504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.719510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.719517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.801 [2024-12-09 17:38:37.731944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.732303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.732321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.732329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.732502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.732677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.732686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.732693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.732699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.801 [2024-12-09 17:38:37.745080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.745381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.745398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.745405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.745579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.745755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.745763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.745770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.745776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.801 [2024-12-09 17:38:37.758208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.758499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.758515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.758523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.758699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.758873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.758882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.758888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.758894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.801 [2024-12-09 17:38:37.771330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.771680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.771697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.771705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.771879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.772055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.772064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.772071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.772077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.801 [2024-12-09 17:38:37.784351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.784640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.784658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.784665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.784840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.785014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.785023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.785031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.785038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.801 [2024-12-09 17:38:37.797333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.797683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.797701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.797708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.797882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.798057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.798066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.798077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.798083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.801 [2024-12-09 17:38:37.810364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.810702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.810719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.810726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.810902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.811077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.811087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.811095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.811101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.801 [2024-12-09 17:38:37.823410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.823768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.823785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.823793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.823967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.801 [2024-12-09 17:38:37.824144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.801 [2024-12-09 17:38:37.824153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.801 [2024-12-09 17:38:37.824160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.801 [2024-12-09 17:38:37.824167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.801 [2024-12-09 17:38:37.836432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.801 [2024-12-09 17:38:37.836772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.801 [2024-12-09 17:38:37.836792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.801 [2024-12-09 17:38:37.836799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.801 [2024-12-09 17:38:37.836972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.837149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.837159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.837166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.837173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.802 [2024-12-09 17:38:37.849456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.849855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.849872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.849879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.850052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.850230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.850238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.850244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.850250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
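Every retry in this stretch leaves the same five-line signature: a disconnect notice, the ECONNREFUSED socket error, a failed flush on the dead file descriptor, a failed reinitialization, and a failed reset, repeating roughly every 12-13 ms. Counting the attempts in a captured copy of this console output (a sketch; log.txt is a hypothetical capture file):

    grep -c 'resetting controller' log.txt           # reset/reconnect attempts started
    grep -c 'Resetting controller failed' log.txt    # attempts that ended in failure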
00:28:08.802 [2024-12-09 17:38:37.862521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.862806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.862822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.862830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.863003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.863178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.863186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.863192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.863198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:08.802 [2024-12-09 17:38:37.875634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.802 [2024-12-09 17:38:37.876038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.876055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.876062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.876241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.876416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.876424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.876431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.876440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.802 [2024-12-09 17:38:37.888712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.889117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.889133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.889140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.889319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.889494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.889502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.889509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.889516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.802 [2024-12-09 17:38:37.901797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.902119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.902135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.902142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.902322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.902497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.902506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.902515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.902522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.802 [2024-12-09 17:38:37.914791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.915138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.915155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.915163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.915340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.915516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.915524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.915534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.915541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
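The trap registered by nvmf/common.sh@512 above wires cleanup into every exit path. Written out as a function, the idiom looks like this (a sketch; process_shm and nvmftestfini are helpers from the suite's common scripts, and the teardown summary in the comments is an assumption):

    on_exit() {
        process_shm --id "$NVMF_APP_SHM_ID" || :   # best-effort shared-memory dump; '|| :' ignores failure
        nvmftestfini                               # assumed to stop the target and undo the network setup
    }
    trap on_exit SIGINT SIGTERM EXIT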
00:28:08.802 [2024-12-09 17:38:37.918267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.802 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.802 [2024-12-09 17:38:37.927827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.928157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.928175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-09 17:38:37.928183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.802 [2024-12-09 17:38:37.928364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.802 [2024-12-09 17:38:37.928539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-09 17:38:37.928548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-09 17:38:37.928555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-09 17:38:37.928561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.802 [2024-12-09 17:38:37.940829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-09 17:38:37.941116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-09 17:38:37.941133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.803 [2024-12-09 17:38:37.941141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.803 [2024-12-09 17:38:37.941320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.803 [2024-12-09 17:38:37.941495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.803 [2024-12-09 17:38:37.941504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.803 [2024-12-09 17:38:37.941510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.803 [2024-12-09 17:38:37.941516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.803 [2024-12-09 17:38:37.953956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.803 [2024-12-09 17:38:37.954309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.803 [2024-12-09 17:38:37.954326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.803 [2024-12-09 17:38:37.954334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.803 [2024-12-09 17:38:37.954507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.803 [2024-12-09 17:38:37.954682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.803 [2024-12-09 17:38:37.954695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.803 [2024-12-09 17:38:37.954701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.803 [2024-12-09 17:38:37.954708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.803 Malloc0 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.803 [2024-12-09 17:38:37.966970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.803 [2024-12-09 17:38:37.967404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.803 [2024-12-09 17:38:37.967421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5aa0 with addr=10.0.0.2, port=4420 00:28:08.803 [2024-12-09 17:38:37.967428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5aa0 is same with the state(6) to be set 00:28:08.803 [2024-12-09 17:38:37.967602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5aa0 (9): Bad file descriptor 00:28:08.803 [2024-12-09 17:38:37.967776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.803 [2024-12-09 17:38:37.967784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.803 [2024-12-09 17:38:37.967791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.803 [2024-12-09 17:38:37.967798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
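Gathered from the rpc_cmd calls interleaved through this stretch (nvmf_create_transport and bdev_malloc_create above, nvmf_create_subsystem here, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener below), the target bring-up is a five-step RPC sequence. Run standalone against a live target it would look like the following sketch, using SPDK's rpc.py client with its default socket; the flag values are exactly the ones the test passes:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as passed by the test
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev with 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420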
00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.803 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.060 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.060 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:09.060 [2024-12-09 17:38:37.979614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.060 [2024-12-09 17:38:37.980059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.060 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.060 17:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2732457 00:28:09.060 [2024-12-09 17:38:38.016480] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:28:10.248 4804.00 IOPS, 18.77 MiB/s [2024-12-09T16:38:40.797Z] 5632.75 IOPS, 22.00 MiB/s [2024-12-09T16:38:41.728Z] 6280.00 IOPS, 24.53 MiB/s [2024-12-09T16:38:42.659Z] 6787.50 IOPS, 26.51 MiB/s [2024-12-09T16:38:43.590Z] 7215.09 IOPS, 28.18 MiB/s [2024-12-09T16:38:44.519Z] 7572.50 IOPS, 29.58 MiB/s [2024-12-09T16:38:45.450Z] 7876.00 IOPS, 30.77 MiB/s [2024-12-09T16:38:46.819Z] 8127.36 IOPS, 31.75 MiB/s [2024-12-09T16:38:46.819Z] 8352.53 IOPS, 32.63 MiB/s 00:28:17.641 Latency(us) 00:28:17.641 [2024-12-09T16:38:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.641 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:17.641 Verification LBA range: start 0x0 length 0x4000 00:28:17.641 Nvme1n1 : 15.01 8355.68 32.64 12884.50 0.00 6006.83 429.10 24966.10 00:28:17.641 [2024-12-09T16:38:46.820Z] =================================================================================================================== 00:28:17.641 [2024-12-09T16:38:46.820Z] Total : 8355.68 32.64 12884.50 0.00 6006.83 429.10 24966.10 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.641 17:38:46 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.641 rmmod nvme_tcp 00:28:17.641 rmmod nvme_fabrics 00:28:17.641 rmmod nvme_keyring 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2733371 ']' 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2733371 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2733371 ']' 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2733371 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733371 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733371' 00:28:17.641 killing process with pid 2733371 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2733371 00:28:17.641 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2733371 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.900 17:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.543 17:38:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:20.543 00:28:20.543 real 0m26.675s 00:28:20.543 user 1m2.881s 00:28:20.543 sys 0m6.760s 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.543 ************************************ 00:28:20.543 END TEST nvmf_bdevperf 00:28:20.543 ************************************ 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.543 ************************************ 00:28:20.543 START TEST nvmf_target_disconnect 00:28:20.543 ************************************ 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:20.543 * Looking for test storage... 00:28:20.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.543 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.544 --rc genhtml_branch_coverage=1 00:28:20.544 --rc genhtml_function_coverage=1 00:28:20.544 --rc genhtml_legend=1 00:28:20.544 --rc geninfo_all_blocks=1 00:28:20.544 --rc geninfo_unexecuted_blocks=1 00:28:20.544 00:28:20.544 ' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.544 --rc genhtml_branch_coverage=1 00:28:20.544 --rc genhtml_function_coverage=1 00:28:20.544 --rc genhtml_legend=1 00:28:20.544 --rc geninfo_all_blocks=1 00:28:20.544 --rc geninfo_unexecuted_blocks=1 00:28:20.544 00:28:20.544 ' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.544 --rc genhtml_branch_coverage=1 00:28:20.544 --rc genhtml_function_coverage=1 00:28:20.544 --rc genhtml_legend=1 00:28:20.544 --rc geninfo_all_blocks=1 00:28:20.544 --rc geninfo_unexecuted_blocks=1 00:28:20.544 00:28:20.544 ' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.544 --rc genhtml_branch_coverage=1 00:28:20.544 --rc genhtml_function_coverage=1 00:28:20.544 --rc genhtml_legend=1 00:28:20.544 --rc geninfo_all_blocks=1 00:28:20.544 --rc geninfo_unexecuted_blocks=1 00:28:20.544 00:28:20.544 ' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.544 17:38:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:20.544 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:20.545 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.545 17:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:25.820 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:25.820 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:25.820 Found net devices under 0000:af:00.0: cvl_0_0 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:25.820 Found net devices under 0000:af:00.1: cvl_0_1 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.820 17:38:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:28:26.079 00:28:26.079 --- 10.0.0.2 ping statistics --- 00:28:26.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.079 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:28:26.079
00:28:26.079 --- 10.0.0.1 ping statistics ---
00:28:26.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:26.079 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:26.079 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:26.338 ************************************
00:28:26.338 START TEST nvmf_target_disconnect_tc1
00:28:26.338 ************************************
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:26.338 [2024-12-09 17:38:55.403325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.338 [2024-12-09 17:38:55.403363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cd410 with addr=10.0.0.2, port=4420
00:28:26.338 [2024-12-09 17:38:55.403398] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:28:26.338 [2024-12-09 17:38:55.403407] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:28:26.338 [2024-12-09 17:38:55.403414] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:28:26.338 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:28:26.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:28:26.338 Initializing NVMe Controllers
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:26.338
00:28:26.338 real 0m0.120s
00:28:26.338 user 0m0.051s
00:28:26.338 sys 0m0.068s
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:28:26.338 ************************************
00:28:26.338 END TEST nvmf_target_disconnect_tc1
00:28:26.338 ************************************
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:26.338 ************************************
00:28:26.338 START TEST nvmf_target_disconnect_tc2
00:28:26.338 ************************************
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2738486
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2738486
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2738486 ']'
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:26.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:26.338 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:26.597 [2024-12-09 17:38:55.535097] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:28:26.597 [2024-12-09 17:38:55.535136] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:26.597 [2024-12-09 17:38:55.613864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:26.597 [2024-12-09 17:38:55.654716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-09 17:38:55.654751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
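[Editor's note] tc1 above is a negative test: no target is listening yet, so the reconnect example's connect() must fail with errno 111 (ECONNREFUSED) and the process must exit non-zero; the harness' NOT wrapper turns that expected failure into a pass (es=1 in the trace, and the es > 128 check rejects signal deaths). A standalone sketch of the same assertion, using the paths and arguments from the trace:

    # tc1 in essence: probing 10.0.0.2:4420 with no target running must fail.
    # (The real NOT helper additionally treats exit codes >128, i.e. signal
    # deaths, as genuine failures rather than the expected refusal.)
    if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
            -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "tc1 OK: probe was refused as expected"
    else
        echo "tc1 FAIL: something was already listening on 4420" >&2
    fi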
00:28:26.597 [2024-12-09 17:38:55.654758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.597 [2024-12-09 17:38:55.654764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.597 [2024-12-09 17:38:55.654769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.597 [2024-12-09 17:38:55.656255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:26.597 [2024-12-09 17:38:55.656362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:26.597 [2024-12-09 17:38:55.656489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:26.597 [2024-12-09 17:38:55.656490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:26.597 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.597 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:26.597 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.597 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.597 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.855 Malloc0 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.855 [2024-12-09 17:38:55.828785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.855 17:38:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.855 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.856 [2024-12-09 17:38:55.857910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2738511 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:26.856 17:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.755 17:38:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2738486 00:28:28.755 17:38:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error 
(sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 [2024-12-09 17:38:57.886066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write 
completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 [2024-12-09 17:38:57.886273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Write completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.755 Read completed with error (sct=0, sc=8) 00:28:28.755 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O 
failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 [2024-12-09 17:38:57.886464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 
starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Read completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 Write completed with error (sct=0, sc=8) 00:28:28.756 starting I/O failed 00:28:28.756 [2024-12-09 17:38:57.886663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.756 [2024-12-09 17:38:57.886803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.886825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.887086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.887121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.887364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.887399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.887594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.887625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.887785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.887817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.887957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.887989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.888168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.888351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 
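[Editor's note] The tc2 bring-up that scrolled past above boils down to the sequence below; rpc_cmd in the trace is effectively the harness wrapper around scripts/rpc.py on /var/tmp/spdk.sock. This is a sketch assembled from the trace, assumed to run from the SPDK checkout root, with nvmfpid standing in for the target PID (2738486 in this run):

    # Start the target inside the namespace, then configure it over RPC.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Put I/O load on the subsystem, then hard-kill the target underneath it.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"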
00:28:28.756 [2024-12-09 17:38:57.888452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.888605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.888712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.888817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.888907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.888920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.889091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.889122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.889328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.889361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.889501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.889533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.889717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.889727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.889816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.889827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 
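[Editor's note] Reading the failure signature above: each "completed with error (sct=0, sc=8)" line is one in-flight command being failed back to the application; status code type 0, status code 0x08 is the NVMe generic status "Command Aborted due to SQ Deletion", which is what the host driver reports when a queue pair dies under load. With -q 32 there are up to 32 commands outstanding per qpair, and -c 0xF gives four worker cores, hence one burst per qpair (ids 1 through 4 in the trace). The "CQ transport error -6 (No such device or address)" is -ENXIO from the dead connection, and the connect() failed, errno = 111 lines that follow are refused reconnect attempts. A quick way to tally the storm from a saved copy of this console output (sketch; build.log is a placeholder path):

    grep -c 'starting I/O failed' build.log                  # total aborted I/Os
    grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c  # one burst per qpair
    grep -c 'connect() failed, errno = 111' build.log        # refused reconnect attempts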
00:28:28.756 [2024-12-09 17:38:57.889966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.889976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.890194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.890204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.890360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.890385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.890559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.890591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.756 [2024-12-09 17:38:57.890713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.756 [2024-12-09 17:38:57.890744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.756 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.890935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.890967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.891149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.891181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.891412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.891422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.891551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.891562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.891764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.891775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 
00:28:28.757 [2024-12-09 17:38:57.891933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.891943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.892126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.892158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.892280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.892313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.892460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.892493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.892671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.892681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.892864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.892896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.893087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.893120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.893347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.893382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.893572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.893604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.893857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.893889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 
00:28:28.757 [2024-12-09 17:38:57.894136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.894168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.894412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.894465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.894670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.894681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.894766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.894776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.894939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.894949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 
00:28:28.757 [2024-12-09 17:38:57.895738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.895836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.895997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.896007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.896158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.896189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.896467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.896525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.896710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.896720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.896919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.896952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.897234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.897267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.897481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.897512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.897654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.897687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 
00:28:28.757 [2024-12-09 17:38:57.897917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.897948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.898258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.898437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.898469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.898591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.757 [2024-12-09 17:38:57.898623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.757 qpair failed and we were unable to recover it. 00:28:28.757 [2024-12-09 17:38:57.898814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.898846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.898980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.899196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.899458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.899571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.899787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 
00:28:28.758 [2024-12-09 17:38:57.899882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.899974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.899987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.900175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.900207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.900517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.900549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.900759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.900772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.900997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.901029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.901286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.901319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.901512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.901525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.901695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.901726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 00:28:28.758 [2024-12-09 17:38:57.901916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.758 [2024-12-09 17:38:57.901946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:28.758 qpair failed and we were unable to recover it. 
00:28:28.758 [2024-12-09 17:38:57.902203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.758 [2024-12-09 17:38:57.902243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:28.758 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats verbatim for every successive reconnect attempt from 17:38:57.902 through 17:38:57.954, always against tqpair=0x7f804c000b90, addr=10.0.0.2, port=4420; only the timestamps differ ...]
00:28:29.038 [2024-12-09 17:38:57.954639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.038 [2024-12-09 17:38:57.954671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.038 qpair failed and we were unable to recover it.
00:28:29.038 [2024-12-09 17:38:57.954864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.954896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.955162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.955193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.955341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.955373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.955518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.955550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.955732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.955765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.956037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.956069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.956278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.956314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.956441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.956472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.956685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.956719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.956916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.956948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 
00:28:29.038 [2024-12-09 17:38:57.957057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.957088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.957297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.957330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.957596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.957628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.957760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.957790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.958055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.958087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.958240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.958273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.958470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.958502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.958651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.958682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.038 [2024-12-09 17:38:57.959044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.038 [2024-12-09 17:38:57.959078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.038 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.959297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.959333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 
00:28:29.039 [2024-12-09 17:38:57.959480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.959512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.959704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.959736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.960078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.960111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.960306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.960339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.960468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.960502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.960646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.960677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.960950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.960983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.961254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.961288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.961428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.961460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.961680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.961712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 
00:28:29.039 [2024-12-09 17:38:57.961926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.961958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.962230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.962270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.962404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.962436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.962619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.962651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.962827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.962860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.963077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.963108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.963356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.963390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.963586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.963619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.963776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.963808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.964004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.964037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 
00:28:29.039 [2024-12-09 17:38:57.964241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.964276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.964470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.964503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.964646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.964679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.964898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.964931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.965109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.965139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.965347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.965382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.965513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.965546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.965725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.965758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.965955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.965988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.966164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.966196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 
00:28:29.039 [2024-12-09 17:38:57.966316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.966349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.966485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.966518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.966739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.966770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.967038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.967071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.967370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.967403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.967603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.967637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.967827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.039 [2024-12-09 17:38:57.967859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.039 qpair failed and we were unable to recover it. 00:28:29.039 [2024-12-09 17:38:57.968097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.968130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.968246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.968280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.968427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.968461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 
00:28:29.040 [2024-12-09 17:38:57.968644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.968676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.968826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.968858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.969066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.969099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.969309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.969342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.969477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.969509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.969627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.969660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.969807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.969840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.970026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.970059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.970290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.970324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.970470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.970501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 
00:28:29.040 [2024-12-09 17:38:57.970702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.970734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.970935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.970974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.971161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.971193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.971385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.971418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.971558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.971592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.971769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.971802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.972067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.972099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.972356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.972388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.972541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.972573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.972772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.972804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 
00:28:29.040 [2024-12-09 17:38:57.972984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.973017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.973263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.973297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.973498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.973531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.973798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.973831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.974090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.974122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.974342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.974376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.974518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.974551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.974745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.974776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.974995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.975028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 00:28:29.040 [2024-12-09 17:38:57.975233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.040 [2024-12-09 17:38:57.975266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.040 qpair failed and we were unable to recover it. 
00:28:29.040 [2024-12-09 17:38:57.975410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.975442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.975565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.975597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.975730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.975763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.975947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.975979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.976110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.976142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.976331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.976364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.976580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.976612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.976831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.976864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.977004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.977037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.977284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.977318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 
00:28:29.041 [2024-12-09 17:38:57.977562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.977595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.977791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.977823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.978045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.978077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.978267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.978301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.978431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.978463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.978598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.978631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.978876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.978909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.979106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.979138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.979321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.979356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.979477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.979510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 
00:28:29.041 [2024-12-09 17:38:57.979708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.979740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.979861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.979893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.980044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.980079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.980279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.980324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.980485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.980527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.980808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.980851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.981004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.981046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.981263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.981298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.981430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.981463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.981656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.981689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 
00:28:29.041 [2024-12-09 17:38:57.981807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.981839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.982016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.982047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.982166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.982199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.982348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.982382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.982570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.982601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.982806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.982838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.982954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.982988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.983165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.983197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.983336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.041 [2024-12-09 17:38:57.983369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.041 qpair failed and we were unable to recover it. 00:28:29.041 [2024-12-09 17:38:57.983547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.042 [2024-12-09 17:38:57.983579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.042 qpair failed and we were unable to recover it. 
00:28:29.042 [2024-12-09 17:38:57.983758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.042 [2024-12-09 17:38:57.983790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.042 qpair failed and we were unable to recover it.
00:28:29.047 [2024-12-09 17:38:58.027258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.047 [2024-12-09 17:38:58.027290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.047 qpair failed and we were unable to recover it.
00:28:29.047 [2024-12-09 17:38:58.027481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.027513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.027775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.027807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.027983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.028014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.028288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.028321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.028469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.028501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.028644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.028676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.028947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.028978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.029177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.029208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.029485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.029518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.029761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.029794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 
00:28:29.047 [2024-12-09 17:38:58.029945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.029976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.030240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.030273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.030457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.030489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.030610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.030643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.030880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.030913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.047 qpair failed and we were unable to recover it. 00:28:29.047 [2024-12-09 17:38:58.031116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.047 [2024-12-09 17:38:58.031148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.031324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.031357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.031554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.031585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.031722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.031753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.031957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.031988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 
00:28:29.048 [2024-12-09 17:38:58.032177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.032209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.032373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.032405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.032540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.032574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.032838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.032870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.033014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.033048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.033318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.033352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.033475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.033508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.033748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.033781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.034027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.034060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.034239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.034275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 
00:28:29.048 [2024-12-09 17:38:58.034402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.034436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.034577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.034609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.034750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.034786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.035030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.035062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.035254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.035288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.035535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.035567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.035741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.035772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.036062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.036094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.036321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.036364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.036542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.036573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 
00:28:29.048 [2024-12-09 17:38:58.036768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.036800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.037067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.037099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.037347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.037381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.037527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.037559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.037665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.037696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.037905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.037936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.038227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.038260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.038533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.038566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.038712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.038743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.039004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.039036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 
00:28:29.048 [2024-12-09 17:38:58.039278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.039312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.039558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.039590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.039857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.039889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.040030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.040063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.048 qpair failed and we were unable to recover it. 00:28:29.048 [2024-12-09 17:38:58.040354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.048 [2024-12-09 17:38:58.040387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.040510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.040542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.040733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.040764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.040983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.041015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.041215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.041256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.041452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.041484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 
00:28:29.049 [2024-12-09 17:38:58.041683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.041715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.041981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.042013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.042244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.042277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.042524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.042555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.042781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.042812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.043065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.043097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.043282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.043315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.043492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.043524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.043805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.043837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.044121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.044152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 
00:28:29.049 [2024-12-09 17:38:58.044435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.044467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.044679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.044710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.044951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.044983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.045160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.045191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.045487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.045519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.045812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.045843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.046029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.046061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.046307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.046340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.046635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.046672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.046958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.046989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 
00:28:29.049 [2024-12-09 17:38:58.047181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.047212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.047398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.047430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.047617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.047647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.047913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.047944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.048156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.048188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.048426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.048458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.048728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.048759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.049057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.049 [2024-12-09 17:38:58.049089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.049 qpair failed and we were unable to recover it. 00:28:29.049 [2024-12-09 17:38:58.049333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.049366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.049648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 
00:28:29.050 [2024-12-09 17:38:58.049951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.049983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.050272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.050304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.050576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.050609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.050799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.050834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.051011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.051042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.051312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.051345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.051534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.051565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.051759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.051792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.052042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.052074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.052346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.052396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 
00:28:29.050 [2024-12-09 17:38:58.052614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.052646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.052825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.052856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.053125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.053156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.053451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.053485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.053673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.053705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.053905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.053938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.054204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.054248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.054442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.054474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.054672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.054704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.054973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.055005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 
00:28:29.050 [2024-12-09 17:38:58.055277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.055311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.055505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.055538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.055727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.055759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.055903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.055935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.056149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.056181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.056487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.056520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.056725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.056757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.057001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.057032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.057327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.057366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.057571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.057603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 
00:28:29.050 [2024-12-09 17:38:58.057799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.057830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.058020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.058052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.058238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.058271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.058461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.058493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.058682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.058715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.058931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.058963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.059240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.059273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.050 qpair failed and we were unable to recover it. 00:28:29.050 [2024-12-09 17:38:58.059557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.050 [2024-12-09 17:38:58.059588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.051 qpair failed and we were unable to recover it. 00:28:29.051 [2024-12-09 17:38:58.059798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.051 [2024-12-09 17:38:58.059830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.051 qpair failed and we were unable to recover it. 00:28:29.051 [2024-12-09 17:38:58.060023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.051 [2024-12-09 17:38:58.060054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.051 qpair failed and we were unable to recover it. 
00:28:29.051 [2024-12-09 17:38:58.060335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-12-09 17:38:58.060369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.051 qpair failed and we were unable to recover it.
[... 93 further identical connect() failed / qpair failed sequences for tqpair=0x7f804c000b90 omitted (2024-12-09 17:38:58.060566 through 17:38:58.084953) ...]
00:28:29.053 [2024-12-09 17:38:58.085156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.053 [2024-12-09 17:38:58.085188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.053 qpair failed and we were unable to recover it.
00:28:29.053 [2024-12-09 17:38:58.085552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.053 [2024-12-09 17:38:58.085628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:29.053 qpair failed and we were unable to recover it.
[... 113 further identical sequences for tqpair=0x511500 omitted (2024-12-09 17:38:58.085837 through 17:38:58.116902) ...]
00:28:29.056 [2024-12-09 17:38:58.117166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.056 [2024-12-09 17:38:58.117198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:29.056 qpair failed and we were unable to recover it.
00:28:29.056 [2024-12-09 17:38:58.117429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.117462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.117788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.117820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.118022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.118054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.118241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.118274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.118551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.118584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.118867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.118899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.119184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.119226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.119498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.119531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.119824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.119857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.120002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.120033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 
00:28:29.056 [2024-12-09 17:38:58.120332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.120367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.120553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.120586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.120839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.120871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.121151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.121183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.121470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.121503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.121699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.121731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.121914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.121945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.122198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.122249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.122529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.122562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 00:28:29.056 [2024-12-09 17:38:58.122762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.056 [2024-12-09 17:38:58.122794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.056 qpair failed and we were unable to recover it. 
00:28:29.057 [2024-12-09 17:38:58.123090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.123122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.123371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.123410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.123725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.123757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.123963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.123996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.124273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.124307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.124513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.124545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.124798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.124830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.125011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.125043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.125300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.125333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.125637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.125670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 
00:28:29.057 [2024-12-09 17:38:58.125941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.125973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.126182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.126213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.126529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.126561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.126759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.126791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.126976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.127009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.127170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.127202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.127434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.127467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.127669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.127700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.128003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.128035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.128290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.128325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 
00:28:29.057 [2024-12-09 17:38:58.128548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.128581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.128865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.128897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.129180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.129213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.129496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.129528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.129832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.129864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.130129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.130161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.130402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.130434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.130736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.130769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.130975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.131013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.131238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.131272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 
00:28:29.057 [2024-12-09 17:38:58.131545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.131577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.131840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.131871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.132132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.132165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.132359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.132392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.132611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.132643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.132851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.132882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.133066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.133100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.133303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.133337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.057 [2024-12-09 17:38:58.133592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.057 [2024-12-09 17:38:58.133624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.057 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.133906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.133938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 
00:28:29.058 [2024-12-09 17:38:58.134140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.134171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.134472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.134505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.134776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.134808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.135029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.135061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.135314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.135349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.135651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.135683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.135970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.136002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.136456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.136492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.136703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.136736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.136997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.137029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 
00:28:29.058 [2024-12-09 17:38:58.137247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.137282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.137561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.137594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.137876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.137907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.138171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.138204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.138508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.138541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.138735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.138766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.139005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.139037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.139256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.139290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.139545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.139578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.139760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.139791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 
00:28:29.058 [2024-12-09 17:38:58.140014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.140046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.140327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.140361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.140497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.140530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.140711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.140743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.141022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.141055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.141342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.141375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.141600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.141631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.141912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.141944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.142204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.142246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.142529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.142572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 
00:28:29.058 [2024-12-09 17:38:58.142805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.142837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.058 qpair failed and we were unable to recover it. 00:28:29.058 [2024-12-09 17:38:58.143104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-09 17:38:58.143137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.143454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.143487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.143746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.143778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.144073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.144105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.144375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.144408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.144677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.144709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.144910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.144942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.145166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.145198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.145461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.145494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 
00:28:29.059 [2024-12-09 17:38:58.145707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.145739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.145923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.145955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.146148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.146180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.146452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.146485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.146765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.146797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.147074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.147108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.147404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.147439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.147729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.147761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.147956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.147989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.148231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.148266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 
00:28:29.059 [2024-12-09 17:38:58.148460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.148492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.148725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.148757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.149013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.149047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.149281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.149314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.149517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.149549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.149823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.149855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.150137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.150176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.150406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.150440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.150720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.150752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.151038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.151070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 
00:28:29.059 [2024-12-09 17:38:58.151354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.151388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.151670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.151702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.151987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.152018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.152247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.152282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.152571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.152605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.152784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.152816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.153116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.153148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.153452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.153486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.153781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.153815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 00:28:29.059 [2024-12-09 17:38:58.154082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.059 [2024-12-09 17:38:58.154114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.059 qpair failed and we were unable to recover it. 
00:28:29.059 [2024-12-09 17:38:58.154394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.059 [2024-12-09 17:38:58.154429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:29.059 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x511500 with addr=10.0.0.2, port=4420 from 17:38:58.154546 through 17:38:58.160668 ...]
00:28:29.060 [2024-12-09 17:38:58.160868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x51f460 is same with the state(6) to be set
00:28:29.060 [2024-12-09 17:38:58.161212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.060 [2024-12-09 17:38:58.161311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:29.060 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 from 17:38:58.161533 through 17:38:58.165079 ...]
00:28:29.061 [2024-12-09 17:38:58.165435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.061 [2024-12-09 17:38:58.165513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.061 qpair failed and we were unable to recover it.
00:28:29.061 [2024-12-09 17:38:58.165893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.061 [2024-12-09 17:38:58.165970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.061 qpair failed and we were unable to recover it.
00:28:29.061 [2024-12-09 17:38:58.166252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.061 [2024-12-09 17:38:58.166289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:29.061 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x511500 with addr=10.0.0.2, port=4420 from 17:38:58.166551 through 17:38:58.212748 ...]
00:28:29.340 [2024-12-09 17:38:58.213043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.213075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.213311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.213344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.213618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.213651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.213785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.213817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.214095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.214126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.214324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.214359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.214666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.214698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.214900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.214932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.215130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.215163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.215386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.215420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 
00:28:29.340 [2024-12-09 17:38:58.215674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.215707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.215906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.215937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.216076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.216109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.216385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.216420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.216670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.216703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.216913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.216945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.217155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.217187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.217503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.217580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.217931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.218006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.218242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.218281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 
00:28:29.340 [2024-12-09 17:38:58.218508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.218540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.218825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.340 [2024-12-09 17:38:58.218874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.340 qpair failed and we were unable to recover it. 00:28:29.340 [2024-12-09 17:38:58.219138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.219169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.219457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.219490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.219769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.219801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.220090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.220122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.220406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.220440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.220646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.220678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.220938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.220970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.221272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.221306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 
00:28:29.341 [2024-12-09 17:38:58.221570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.221602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.221904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.221936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.222203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.222244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.222367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.222400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.222673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.222705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.222905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.222939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.223152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.223184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.223453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.223486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.223773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.223805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.223962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.223994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 
00:28:29.341 [2024-12-09 17:38:58.224192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.224234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.224523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.224555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.224846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.224877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.225162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.225194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.225478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.225511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.225790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.225823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.226036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.226070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.226355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.226389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.226641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.226689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.226976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.227009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 
00:28:29.341 [2024-12-09 17:38:58.227274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.227310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.227607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.227639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.227906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.227941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.228185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.228228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.228444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.228478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.228630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.228663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.228948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.228982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.229240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.229274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.229516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.229549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.229752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.229785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 
00:28:29.341 [2024-12-09 17:38:58.230063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.341 [2024-12-09 17:38:58.230096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.341 qpair failed and we were unable to recover it. 00:28:29.341 [2024-12-09 17:38:58.230352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.230387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.230695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.230729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.231038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.231070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.231202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.231246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.231454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.231489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.231806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.231840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.232136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.232169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.232471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.232506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.232662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.232694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 
00:28:29.342 [2024-12-09 17:38:58.232973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.233006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.233246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.233282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.233537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.233572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.233771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.233804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.233997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.234030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.234254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.234289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.234551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.234588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.234871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.234905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.235139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.235174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.235449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.235486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 
00:28:29.342 [2024-12-09 17:38:58.235784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.235817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.236085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.236118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.236404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.236438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.236647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.236680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.236936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.236970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.237273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.237308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.237567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.237601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.237826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.237860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.238111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.238150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.238480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.238514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 
00:28:29.342 [2024-12-09 17:38:58.238792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.238826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.239019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.239052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.239253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.239288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.239426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.239458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.239779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.239812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.240007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.240040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.240252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.240286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.240544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.240577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.240853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.240886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.342 qpair failed and we were unable to recover it. 00:28:29.342 [2024-12-09 17:38:58.241117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-12-09 17:38:58.241150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 
00:28:29.343 [2024-12-09 17:38:58.241339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.241373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.241654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.241687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.241950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.241983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.242287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.242321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.242615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.242648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.242920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.242953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.243245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.243279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.243552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.243586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.243787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.243819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.244005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.244039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 
00:28:29.343 [2024-12-09 17:38:58.244322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.244355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.244621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.244655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.244789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.244822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.245074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.245106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.245389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.245424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.245715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.245749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.246019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.246051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.246248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.246282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.246543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.246575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.246854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.246888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 
00:28:29.343 [2024-12-09 17:38:58.247149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.247182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.247325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.247359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.247553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.247586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.247817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.247850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.248125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.248158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.248467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.248501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.248708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.248741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.249023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.249056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.249279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.249319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 00:28:29.343 [2024-12-09 17:38:58.249532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.343 [2024-12-09 17:38:58.249565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.343 qpair failed and we were unable to recover it. 
00:28:29.343 [2024-12-09 17:38:58.249790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.343 [2024-12-09 17:38:58.249823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:29.343 qpair failed and we were unable to recover it.
00:28:29.343 [the posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet above repeats for every reconnect attempt from 17:38:58.249790 through 17:38:58.308113, always against addr=10.0.0.2, port=4420 and always with errno = 111]
00:28:29.344 [2024-12-09 17:38:58.257482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.344 [2024-12-09 17:38:58.257558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.344 qpair failed and we were unable to recover it.
00:28:29.349 [from 17:38:58.257482 onward the failing tqpair is 0x7f8048000b90 instead of 0x7f8054000b90; the retries against 10.0.0.2:4420 continue to fail identically with errno = 111 through the end of this span at 17:38:58.308113]
00:28:29.349 [2024-12-09 17:38:58.308302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.308335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.308594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.308627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.308918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.308950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.309233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.309266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.309486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.309518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.309824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.309855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.310060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.310092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.310369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.310402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.310586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.310618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.310893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.310925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 
00:28:29.349 [2024-12-09 17:38:58.311143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.311175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.311480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.311513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.311744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.311775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.312050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.312082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.312311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.312345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.312626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.312658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.312938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.312971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.313177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.313210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.313502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.313534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 00:28:29.349 [2024-12-09 17:38:58.313754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.349 [2024-12-09 17:38:58.313786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.349 qpair failed and we were unable to recover it. 
00:28:29.349 [2024-12-09 17:38:58.314040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.314073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.314354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.314387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.314512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.314543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.314797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.314830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.314963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.314995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.315189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.315241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.315519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.315552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.315821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.315852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.316140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.316183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.316398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.316431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 
00:28:29.350 [2024-12-09 17:38:58.316619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.316650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.316928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.316960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.317173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.317204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.317497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.317529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.317804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.317836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.318132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.318164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.318360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.318394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.318532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.318564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.318766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.318797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.319083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.319115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 
00:28:29.350 [2024-12-09 17:38:58.319421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.319455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.319717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.319749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.319951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.319983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.320249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.320282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.320579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.320610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.320881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.320913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.321210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.321253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.321533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.321565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.321766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.321798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.322074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.322105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 
00:28:29.350 [2024-12-09 17:38:58.322308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.322342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.322642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.322674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.322940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.322972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.323282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.323315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.323593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.323626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.323895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.323928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.324240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.324273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.324565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.324598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.324746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.350 [2024-12-09 17:38:58.324777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.350 qpair failed and we were unable to recover it. 00:28:29.350 [2024-12-09 17:38:58.325026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.325058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 
00:28:29.351 [2024-12-09 17:38:58.325339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.325374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.325579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.325611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.325744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.325776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.326031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.326063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.326255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.326288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.326470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.326503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.326757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.326789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.327015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.327047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.327248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.327288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.327425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.327458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 
00:28:29.351 [2024-12-09 17:38:58.327641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.327674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.327945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.327977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.328176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.328209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.328534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.328566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.328820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.328853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.329067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.329099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.329377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.329410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.329614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.329646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.329902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.329935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.330238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.330271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 
00:28:29.351 [2024-12-09 17:38:58.330539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.330572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.330865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.330898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.331175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.331208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.331422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.331453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.331636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.331668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.331925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.331957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.332258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.332292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.332495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.332527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.332708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.332740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.332947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.332980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 
00:28:29.351 [2024-12-09 17:38:58.333237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.333269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.333570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.333602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.333805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.333837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.334094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.334126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.334413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.334446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.334585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.334618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.334824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.334856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.335052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.351 [2024-12-09 17:38:58.335084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.351 qpair failed and we were unable to recover it. 00:28:29.351 [2024-12-09 17:38:58.335364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.335398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.335680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.335712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 
00:28:29.352 [2024-12-09 17:38:58.335993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.336025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.336282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.336315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.336548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.336582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.336862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.336895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.337100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.337132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.337438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.337471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.337693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.337725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.337988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.338021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.338255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.338295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.338518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.338551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 
00:28:29.352 [2024-12-09 17:38:58.338733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.338765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.338963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.338995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.339278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.339311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.339579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.339611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.339798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.339829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.340014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.340046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.340304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.340338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.340642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.340673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.340939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.340971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.341275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.341310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 
00:28:29.352 [2024-12-09 17:38:58.341437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.341468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.341682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.341714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.342039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.342072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.342342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.342375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.342583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.342615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.342894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.342927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.343239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.343272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.343534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.343566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.343764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.343796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 00:28:29.352 [2024-12-09 17:38:58.343929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.352 [2024-12-09 17:38:58.343960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.352 qpair failed and we were unable to recover it. 
00:28:29.352 [2024-12-09 17:38:58.344143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.352 [2024-12-09 17:38:58.344175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.352 qpair failed and we were unable to recover it.
[the same three-line error repeats for every reconnect attempt from 2024-12-09 17:38:58.344 through 17:38:58.401 (Jenkins time 00:28:29.352-00:28:29.358): connect() to 10.0.0.2 port 4420 keeps failing with errno = 111, and tqpair=0x7f8048000b90 cannot be recovered]
00:28:29.358 [2024-12-09 17:38:58.401347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.358 [2024-12-09 17:38:58.401381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.358 qpair failed and we were unable to recover it.
00:28:29.358 [2024-12-09 17:38:58.401585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.401616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.401840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.401872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.402078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.402110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.402316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.402350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.402534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.402567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.402822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.402854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.403105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.403137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.403416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.403450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.403650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.403682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.403889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.403922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 
00:28:29.358 [2024-12-09 17:38:58.404108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.404141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.404337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.404371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.404576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.404608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.404898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.404930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.405161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.405192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.405478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.405510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.405734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.405767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.405950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.405981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.406161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.358 [2024-12-09 17:38:58.406192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.358 qpair failed and we were unable to recover it. 00:28:29.358 [2024-12-09 17:38:58.406398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.406431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 
00:28:29.359 [2024-12-09 17:38:58.406707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.406738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.407026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.407060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.407339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.407374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.407573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.407605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.407865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.407897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.408160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.408192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.408415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.408449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.408747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.408780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.408996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.409028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.409304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.409339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 
00:28:29.359 [2024-12-09 17:38:58.409647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.409679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.409954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.409986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.410200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.410240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.410494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.410526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.410778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.410810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.411109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.411147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.411421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.411456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.411644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.411676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.411959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.411991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.412211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.412270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 
00:28:29.359 [2024-12-09 17:38:58.412450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.412482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.412665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.412697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.412976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.413009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.413288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.413321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.413593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.413625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.413916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.413948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.414081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.414111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.414393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.414427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.414646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.414677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.414884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.414916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 
00:28:29.359 [2024-12-09 17:38:58.415118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.415151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.415431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.415465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.415647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.415678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.415879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.415911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.416105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.416136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.416393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.416427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.416613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.416645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.359 [2024-12-09 17:38:58.416903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.359 [2024-12-09 17:38:58.416935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.359 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.417119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.417152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.417432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.417465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 
00:28:29.360 [2024-12-09 17:38:58.417662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.417693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.417932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.417964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.418246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.418280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.418563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.418595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.418880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.418913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.419098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.419131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.419335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.419369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.419646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.419679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.419960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.419992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.420277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.420310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 
00:28:29.360 [2024-12-09 17:38:58.420595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.420628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.420908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.420941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.421147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.421179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.421466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.421500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.421784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.421816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.422097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.422134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.422420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.422453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.422568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.422600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.422875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.422906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.423089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.423121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 
00:28:29.360 [2024-12-09 17:38:58.423304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.423337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.423599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.423631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.423815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.423846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.424124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.424156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.424418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.424452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.424678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.424712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.424988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.425020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.425243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.425276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.425476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.425507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.425769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.425802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 
00:28:29.360 [2024-12-09 17:38:58.426102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.426134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.426423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.426457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.426711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.426742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.426934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.426966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.427170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.427202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.427468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.360 [2024-12-09 17:38:58.427501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.360 qpair failed and we were unable to recover it. 00:28:29.360 [2024-12-09 17:38:58.427756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.427789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.428092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.428124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.428390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.428423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.428647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.428678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 
00:28:29.361 [2024-12-09 17:38:58.428863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.428896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.429176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.429207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.429351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.429385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.429578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.429610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.429812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.429843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.430028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.430060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.430337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.430370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.430554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.430586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.430782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.430815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.431008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.431039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 
00:28:29.361 [2024-12-09 17:38:58.431248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.431281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.431566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.431602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.431905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.431938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.432144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.432177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.432468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.432501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.432800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.432839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.433125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.433158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.433457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.433492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.433764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.433797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.434003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.434035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 
00:28:29.361 [2024-12-09 17:38:58.434311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.434345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.434600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.434633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.434936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.434970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.435239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.435272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.435562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.435596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.435789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.435822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.436114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.436146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.436356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.436390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.436575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.436608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 00:28:29.361 [2024-12-09 17:38:58.436799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.361 [2024-12-09 17:38:58.436832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.361 qpair failed and we were unable to recover it. 
00:28:29.361 [2024-12-09 17:38:58.437044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.361 [2024-12-09 17:38:58.437075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.362 qpair failed and we were unable to recover it.
00:28:29.362 [... the three messages above repeat, near-verbatim, approximately 200 more times between 17:38:58.437 and 17:38:58.493; every reconnect attempt to 10.0.0.2, port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...]
00:28:29.367 [2024-12-09 17:38:58.493870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.367 [2024-12-09 17:38:58.493902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.367 qpair failed and we were unable to recover it.
00:28:29.367 [2024-12-09 17:38:58.494177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.494209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.494444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.494478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.494688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.494721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.494997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.495029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.495235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.495270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.495541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.495586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.495801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.495834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.496038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.496070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.496344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.496394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.496593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.496626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 
00:28:29.367 [2024-12-09 17:38:58.496880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.496912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.497117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.497149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.497374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.497408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.497605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.497650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.497829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.497865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.498124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.498165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.498443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.498492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.498702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.498735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.367 qpair failed and we were unable to recover it. 00:28:29.367 [2024-12-09 17:38:58.499057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.367 [2024-12-09 17:38:58.499088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.368 qpair failed and we were unable to recover it. 00:28:29.368 [2024-12-09 17:38:58.499380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.368 [2024-12-09 17:38:58.499413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.368 qpair failed and we were unable to recover it. 
00:28:29.368 [2024-12-09 17:38:58.499691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.368 [2024-12-09 17:38:58.499731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.368 qpair failed and we were unable to recover it. 00:28:29.368 [2024-12-09 17:38:58.500053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.368 [2024-12-09 17:38:58.500090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.368 qpair failed and we were unable to recover it. 00:28:29.368 [2024-12-09 17:38:58.500374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.368 [2024-12-09 17:38:58.500408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.368 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.500632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.500666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.500856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.500887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.501166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.501198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.501429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.501476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.501834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.501883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.502190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.502256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.502489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.502539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 
00:28:29.642 [2024-12-09 17:38:58.502712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.502756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.503004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.503052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.503369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.503421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.503584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.503631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.503883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.503932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.504143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.504186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.504496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.504546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.504872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.504911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.505194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.505251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.505562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.505596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 
00:28:29.642 [2024-12-09 17:38:58.505882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.505914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.506196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.506241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.506514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.506546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.506758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.506791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.507073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.507106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.507360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.507395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.507607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.507640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.507786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.507819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.508095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.508127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.508397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.508433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 
00:28:29.642 [2024-12-09 17:38:58.508621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.508654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.508771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.508803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.509078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.509111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.642 qpair failed and we were unable to recover it. 00:28:29.642 [2024-12-09 17:38:58.509436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.642 [2024-12-09 17:38:58.509470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.509771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.509804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.510007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.510047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.510326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.510360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.510639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.510672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.510885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.510919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.511198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.511244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 
00:28:29.643 [2024-12-09 17:38:58.511451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.511484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.511734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.511767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.512049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.512082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.512349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.512384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.512499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.512533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.512728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.512760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.513032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.513065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.513321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.513355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.513618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.513651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.513933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.513966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 
00:28:29.643 [2024-12-09 17:38:58.514250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.514284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.514563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.514597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.514853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.514886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.515142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.515175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.515459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.515493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.515689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.515722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.515853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.515885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.516163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.516195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.516461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.516495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.516819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.516851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 
00:28:29.643 [2024-12-09 17:38:58.517127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.517160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.517451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.517486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.517768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.517805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.518058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.518091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.518307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.518341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.518602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.518635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.518866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.518899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.519208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.519251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.519506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.519540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.519695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.519728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 
00:28:29.643 [2024-12-09 17:38:58.519981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.520014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.520289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.520325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.643 qpair failed and we were unable to recover it. 00:28:29.643 [2024-12-09 17:38:58.520608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.643 [2024-12-09 17:38:58.520641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.520936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.520969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.521176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.521209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.521496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.521530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.521723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.521756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.521954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.521987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.522268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.522302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.522508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.522541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 
00:28:29.644 [2024-12-09 17:38:58.522675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.522708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.522993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.523026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.523313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.523345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.523629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.523663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.523799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.523832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.524085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.524118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.524377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.524411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.524684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.524717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.524908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.524940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.525209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.525252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 
00:28:29.644 [2024-12-09 17:38:58.525521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.525555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.525862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.525895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.526096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.526129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.526314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.526349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.526631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.526663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.526869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.526902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.527085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.527117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.527395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.527429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.527738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.527771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.528046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.528079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 
00:28:29.644 [2024-12-09 17:38:58.528356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.528390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.528673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.528706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.528970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.529008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.529215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.529258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.529531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.529564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.529869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.529902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.530163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.530196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.530473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.530507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.530641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.530675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 00:28:29.644 [2024-12-09 17:38:58.530895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.644 [2024-12-09 17:38:58.530927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.644 qpair failed and we were unable to recover it. 
00:28:29.644 [2024-12-09 17:38:58.531184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.644 [2024-12-09 17:38:58.531240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.644 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 208 more times, timestamps 2024-12-09 17:38:58.531523 through 17:38:58.589697 ...]
00:28:29.650 [2024-12-09 17:38:58.590033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.650 [2024-12-09 17:38:58.590065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.650 qpair failed and we were unable to recover it.
00:28:29.650 [2024-12-09 17:38:58.590275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.590309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.590585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.590618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.590921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.590955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.591136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.591169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.591462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.591497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.591773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.591807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.591942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.591975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.592239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.592274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.592560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.592593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.592895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.592928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 
00:28:29.650 [2024-12-09 17:38:58.593198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.593242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.593453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.593487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.593716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.593749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.593935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.593968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.594239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.594273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.594480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.594514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.594798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.594831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.595018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.595050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.595315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.595349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 00:28:29.650 [2024-12-09 17:38:58.595654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.650 [2024-12-09 17:38:58.595687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.650 qpair failed and we were unable to recover it. 
00:28:29.650 [2024-12-09 17:38:58.595947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.595980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.596263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.596296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.596614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.596653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.596954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.596987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.597190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.597232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.597516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.597549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.597828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.597861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.598048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.598080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.598236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.598270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.598455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.598489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 
00:28:29.651 [2024-12-09 17:38:58.598647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.598679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.598955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.598988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.599284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.599319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.599587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.599619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.599878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.599911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.600104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.600136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.600370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.600404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.600606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.600639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.600753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.600786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.600982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.601015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 
00:28:29.651 [2024-12-09 17:38:58.601305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.601340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.601632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.601664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.601845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.601877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.602137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.602169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.602461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.602496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.602777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.602809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.603095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.603128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.603439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.603474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.603746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.603779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.603990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.604023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 
00:28:29.651 [2024-12-09 17:38:58.604303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.604337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.604621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.604654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.604933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.604966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.651 [2024-12-09 17:38:58.605248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.651 [2024-12-09 17:38:58.605282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.651 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.605563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.605595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.605878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.605911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.606112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.606145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.606414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.606447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.606703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.606736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.606931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.606964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 
00:28:29.652 [2024-12-09 17:38:58.607173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.607206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.607533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.607567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.607848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.607900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.608031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.608064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.608329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.608363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.608639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.608672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.608959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.608993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.609274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.609308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.609438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.609471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.609692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.609725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 
00:28:29.652 [2024-12-09 17:38:58.609996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.610028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.610326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.610361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.610653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.610686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.610911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.610944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.611248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.611282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.611577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.611865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.611899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.612178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.612211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.612355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.612388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.612661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.612694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 
00:28:29.652 [2024-12-09 17:38:58.612873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.612906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.613161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.613193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.613460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.613494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.613701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.613734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.614005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.614038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.614243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.614278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.614562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.614596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.614891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.614924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.615192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.615252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.652 [2024-12-09 17:38:58.615455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.615489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 
00:28:29.652 [2024-12-09 17:38:58.615716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.652 [2024-12-09 17:38:58.615749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.652 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.616024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.616056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.616282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.616317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.616602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.616636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.616915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.616948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.617154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.617186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.617466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.617500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.617798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.617831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.618045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.618078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.618201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.618244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 
00:28:29.653 [2024-12-09 17:38:58.618540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.618574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.618864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.618896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.619096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.619134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.619262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.619297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.619427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.619459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.619640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.619673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.619936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.619968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.620251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.620285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.620498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.620530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.620712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.620745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 
00:28:29.653 [2024-12-09 17:38:58.620946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.620978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.621261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.621295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.621599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.621632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.621891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.621924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.622114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.622147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.622428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.622462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.622670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.622703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.622978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.623011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.623289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.623323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.623547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.623579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 
00:28:29.653 [2024-12-09 17:38:58.623835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.623868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.624130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.624163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.624381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.624416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.624655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.624687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.624867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.624899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.625200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.625243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.625535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.625568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.625869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.625902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.626171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.626204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 00:28:29.653 [2024-12-09 17:38:58.626505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.653 [2024-12-09 17:38:58.626538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.653 qpair failed and we were unable to recover it. 
00:28:29.653 [2024-12-09 17:38:58.626738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.654 [2024-12-09 17:38:58.626771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:29.654 qpair failed and we were unable to recover it.
00:28:29.659 (the three messages above repeat with advancing timestamps through [2024-12-09 17:38:58.684842]: every connect() attempt to 10.0.0.2, port=4420 for tqpair=0x7f8048000b90 was refused with errno = 111, and each time the qpair failed and could not be recovered)
00:28:29.659 [2024-12-09 17:38:58.685095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.685127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.685380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.685414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.685606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.685638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.685918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.685951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.686233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.686267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.686551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.686583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.686857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.686889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.687185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.687227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.687431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.687462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.687717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.687749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 
00:28:29.659 [2024-12-09 17:38:58.688030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.688061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.688251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.688284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.688490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.688521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.688794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.688826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.689013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.689045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.689325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.689358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.689629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.689661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.659 [2024-12-09 17:38:58.689883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.659 [2024-12-09 17:38:58.689914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.659 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.690099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.690131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.690336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.690368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-12-09 17:38:58.690551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.690582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.690858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.690890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.691111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.691141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.691420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.691453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.691653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.691685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.691866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.691898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.692104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.692136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.692332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.692366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.692548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.692580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.692785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.692822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-12-09 17:38:58.693006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.693038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.693313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.693347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.693671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.693703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.693929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.693962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.694258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.694290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.694419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.694451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.694753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.694784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.695011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.695043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.695244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.695277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.695579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.695611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-12-09 17:38:58.695871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.695903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.696195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.696237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.696507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.696540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.696801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.696833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.697062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.697094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.697312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.697346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.697550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.697583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.697883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.697916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.698204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.698243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.698457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.698490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-12-09 17:38:58.698677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.698710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.698991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.699023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.699206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.699250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.699508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.660 [2024-12-09 17:38:58.699541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-12-09 17:38:58.699817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.699849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.700137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.700170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.700471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.700505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.700769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.700802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.701107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.701139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.701338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.701373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 
00:28:29.661 [2024-12-09 17:38:58.701570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.701603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.701816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.701850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.702048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.702081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.702275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.702308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.702585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.702617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.702817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.702849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.703113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.703144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.703437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.703471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.703704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.703736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.704052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.704096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 
00:28:29.661 [2024-12-09 17:38:58.704442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.704476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.704689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.704721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.704854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.704885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.705093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.705125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.705330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.705364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.705559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.705592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.705817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.705849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.706196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.706253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.706533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.706565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.706855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.706886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 
00:28:29.661 [2024-12-09 17:38:58.707114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.707145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.707467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.707500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.707748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.707779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.708072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.708105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.708369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.708402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.708598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.708631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.708829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.708861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.709089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.709120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.709399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.709433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.709579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.709611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 
00:28:29.661 [2024-12-09 17:38:58.709797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.709829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.710031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.710064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.710260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.661 [2024-12-09 17:38:58.710293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-12-09 17:38:58.710502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.710533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.710810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.710842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.710979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.711010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.711298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.711331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.711611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.711643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.711915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.711947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.712146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.712179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 
00:28:29.662 [2024-12-09 17:38:58.712372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.712405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.712662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.712694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.712905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.712937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.713205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.713250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.713503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.713535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.713672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.713704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.714056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.714088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.714350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.714384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.714652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.714685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.714911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.714949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 
00:28:29.662 [2024-12-09 17:38:58.715253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.715286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.715548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.715579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.715834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.715867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.716124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.716155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.716443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.716476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.716757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.716791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.717008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.717040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.717240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.717274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.717475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.717507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.717661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.717694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 
00:28:29.662 [2024-12-09 17:38:58.717978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.718010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.718232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.718266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.718517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.718549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.718861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.718894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.719152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.719185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.719470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.719551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.719928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.720005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.720319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.720359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.720643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.720676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 00:28:29.662 [2024-12-09 17:38:58.720985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.662 [2024-12-09 17:38:58.721018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.662 qpair failed and we were unable to recover it. 
00:28:29.662 [2024-12-09 17:38:58.721239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.662 [2024-12-09 17:38:58.721273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.662 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error repeats for every reconnect attempt between 17:38:58.721239 and 17:38:58.786554; roughly 200 further identical occurrences (all errno = 111, tqpair=0x7f804c000b90, addr=10.0.0.2, port=4420) are omitted here ...]
00:28:29.668 [2024-12-09 17:38:58.786522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.668 [2024-12-09 17:38:58.786554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.668 qpair failed and we were unable to recover it.
00:28:29.668 [2024-12-09 17:38:58.786763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.786795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.787075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.787108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.787417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.787451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.787709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.787741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.787970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.788003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.788261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.788294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.788597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.788629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.788837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.788870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.789062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.789096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.789300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.789339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 
00:28:29.668 [2024-12-09 17:38:58.789526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.789558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.789708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.789740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.790040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.790072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.790263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.790298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.790566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.790599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.790808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.790840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.791096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.668 [2024-12-09 17:38:58.791129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.668 qpair failed and we were unable to recover it. 00:28:29.668 [2024-12-09 17:38:58.791347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.791381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.791589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.791621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.791803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.791836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 
00:28:29.669 [2024-12-09 17:38:58.792042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.792075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.792357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.792390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.792716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.792749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.793044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.793077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.793350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.793384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.793697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.793730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.793984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.794017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.794288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.794322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.794454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.794488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.794742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.794774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 
00:28:29.669 [2024-12-09 17:38:58.795050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.795082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.795370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.795404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.795592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.795624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.795851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.795883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.796157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.796189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.796486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.796520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.796861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.796940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.797187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.797240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.797455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.797489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.797718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.797750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 
00:28:29.669 [2024-12-09 17:38:58.798056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.798088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.798356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.798390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.798668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.798700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.798991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.799022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.799207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.799249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.799374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.799406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.799601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.799632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.799881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.799913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.800191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.800230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.800512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.800544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 
00:28:29.669 [2024-12-09 17:38:58.800759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.800792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.801075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.801108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.801414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.801447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.801697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.669 [2024-12-09 17:38:58.801730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.669 qpair failed and we were unable to recover it. 00:28:29.669 [2024-12-09 17:38:58.801996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.802028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.802332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.802365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.802645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.802677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.802869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.802900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.803170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.803202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.803481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.803513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 
00:28:29.670 [2024-12-09 17:38:58.803698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.803730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.804006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.670 [2024-12-09 17:38:58.804038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.670 qpair failed and we were unable to recover it. 00:28:29.670 [2024-12-09 17:38:58.804324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.804358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.804639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.804671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.804893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.804926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.805205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.805246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.805435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.805468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.805743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.805775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.806056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.806089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.806299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.806332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 
00:28:29.946 [2024-12-09 17:38:58.806609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.806642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.806902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.806933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.807080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.807112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.807370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.807404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.807548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.807580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.807808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.807841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.808095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.808133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.808401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.808434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.808716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.808749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.809032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.809065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 
00:28:29.946 [2024-12-09 17:38:58.809288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.809321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.809577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.809610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.809833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.809866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.810059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.810090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.810367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.810401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.810621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.810653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.810932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.810964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.811251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.811284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.811567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.811599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.811884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.811916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 
00:28:29.946 [2024-12-09 17:38:58.812200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.812242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.812360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.812391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.812648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.946 [2024-12-09 17:38:58.812680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.946 qpair failed and we were unable to recover it. 00:28:29.946 [2024-12-09 17:38:58.812982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.813014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.813280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.813313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.813591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.813624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.813760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.813791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.814066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.814097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.814379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.814414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.814694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.814726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 
00:28:29.947 [2024-12-09 17:38:58.815012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.815044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.815320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.815354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.815647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.815679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.815955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.815988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.816305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.816337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.816538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.816571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.816848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.816881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.817134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.817167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.817480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.817514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.817813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.817845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 
00:28:29.947 [2024-12-09 17:38:58.818050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.818083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.818303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.818337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.818600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.818633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.818932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.818964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.819238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.819272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.819489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.819521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.819638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.819681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.819974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.820007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.820283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.820317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.820607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.820640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 
00:28:29.947 [2024-12-09 17:38:58.820884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.820917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.821186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.821226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.821375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.821408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.821711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.821742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.822007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.822040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.822265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.822300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.822582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.822615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.822749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.822781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.822983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.823015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 00:28:29.947 [2024-12-09 17:38:58.823200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.947 [2024-12-09 17:38:58.823242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.947 qpair failed and we were unable to recover it. 
00:28:29.947 [2024-12-09 17:38:58.823553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.947 [2024-12-09 17:38:58.823586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:29.947 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously, with timestamps advancing from 17:38:58.823553 through 17:38:58.879877; every attempt targets tqpair=0x7f8054000b90 at 10.0.0.2, port=4420 and fails identically with errno = 111 ...]
00:28:29.953 [2024-12-09 17:38:58.879844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.953 [2024-12-09 17:38:58.879877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:29.953 qpair failed and we were unable to recover it.
00:28:29.953 [2024-12-09 17:38:58.880145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.880178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.880533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.880572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.880866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.880898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.881243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.881324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.881614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.881651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.881924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.881957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.882248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.882283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.882541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.882573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.882803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.882836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.883023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.883058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 
00:28:29.953 [2024-12-09 17:38:58.883346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.883379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.883676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.883709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.883988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.884021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.884335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.884369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.884632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.884664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.884965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.884998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.885267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.885310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.885464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.885496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.953 [2024-12-09 17:38:58.885751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.953 [2024-12-09 17:38:58.885783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.953 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.886057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.886089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 
00:28:29.954 [2024-12-09 17:38:58.886324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.886357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.886488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.886518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.886804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.886834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.887064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.887097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.887384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.887416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.887697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.887729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.887876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.887909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.888123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.888157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.888459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.888493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.888628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.888661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 
00:28:29.954 [2024-12-09 17:38:58.888816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.888850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.889103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.889135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.889276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.889310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.889492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.889524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.889656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.889688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.889897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.889930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.890127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.890158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.890366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.890400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.890605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.890639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.890859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.890895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 
00:28:29.954 [2024-12-09 17:38:58.891083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.891388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.891421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.891701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.891733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.892060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.892093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.892374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.892407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.892614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.892646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.892928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.892962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.893247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.893283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.893538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.893571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.893775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.893807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 
00:28:29.954 [2024-12-09 17:38:58.894084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.894117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.894391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.894429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.894656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.894690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.894915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.894947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.895135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.895167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.895454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.895487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.895694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.895733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.895954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.954 [2024-12-09 17:38:58.895986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.954 qpair failed and we were unable to recover it. 00:28:29.954 [2024-12-09 17:38:58.896181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.896214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.896484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.896516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 
00:28:29.955 [2024-12-09 17:38:58.896715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.896746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.896963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.896995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.897197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.897240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.897448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.897480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.897626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.897661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.897893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.897924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.898109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.898140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.898401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.898435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.898657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.898688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.898974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.899006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 
00:28:29.955 [2024-12-09 17:38:58.899195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.899235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.899445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.899478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.899613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.899645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.899921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.899953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.900235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.900269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.900418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.900449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.900653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.900686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.900945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.900976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.901280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.901312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.901518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.901550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 
00:28:29.955 [2024-12-09 17:38:58.901751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.901783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.902065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.902098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.902348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.902382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.902545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.902578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.902837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.902870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.903169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.903201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.903471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.903503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.903793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.903825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.904036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.904067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.904289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.904321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 
00:28:29.955 [2024-12-09 17:38:58.904454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.904487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.904676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.904708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.904961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.955 [2024-12-09 17:38:58.904993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.955 qpair failed and we were unable to recover it. 00:28:29.955 [2024-12-09 17:38:58.905132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.905165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.905368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.905401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.905611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.905644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.905912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.905951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.906155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.906186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.906462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.906495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.906760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.906791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 
00:28:29.956 [2024-12-09 17:38:58.907092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.907125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.907417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.907452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.907725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.907757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.908051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.908083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.908287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.908320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.908502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.908534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.908744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.908776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.909106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.909139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.909463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.909497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.909755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.909786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 
00:28:29.956 [2024-12-09 17:38:58.910068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.910101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.910381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.910414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.910697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.910728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.910956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.910988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.911190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.911232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.911466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.911499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.911778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.911809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.912112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.912145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.912353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.912387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.912521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.912552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 
00:28:29.956 [2024-12-09 17:38:58.912687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.912719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.912831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.912862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.912988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.913021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.913330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.913409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.913730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.913767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.913971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.914005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.914266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.914301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.914439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.914471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.914670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.914703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 00:28:29.956 [2024-12-09 17:38:58.914974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.956 [2024-12-09 17:38:58.915006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:29.956 qpair failed and we were unable to recover it. 
00:28:29.956 [2024-12-09 17:38:58.915272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.956 [2024-12-09 17:38:58.915307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:29.956 qpair failed and we were unable to recover it.
00:28:29.956-00:28:29.962 [... the three lines above repeat back-to-back roughly 200 times between 17:38:58.915 and 17:38:58.965, differing only in the microsecond timestamps and the tqpair pointer: 0x7f8054000b90 until ~17:38:58.938 (with single failures on 0x7f804c000b90, 0x7f8048000b90 and 0x511500 at ~17:38:58.934-.935), then 0x7f8048000b90 for the remainder; addr=10.0.0.2, port=4420 throughout ...]
00:28:29.962 [2024-12-09 17:38:58.965774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.965807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.966026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.966058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.966274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.966308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.966587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.966619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.966745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.966776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.966966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.966999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.967272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.967305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.967509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.967541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.967756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.967788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.967917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.967949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 
00:28:29.962 [2024-12-09 17:38:58.968267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.968302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.968486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.968516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.968707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.968740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.968960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.968993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.969270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.969304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.969413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.969445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.969647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.969679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.969858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.969891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.970086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.970118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.970367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.970401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 
00:28:29.962 [2024-12-09 17:38:58.970678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.970711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.970914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.970947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.971203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.962 [2024-12-09 17:38:58.971245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.962 qpair failed and we were unable to recover it. 00:28:29.962 [2024-12-09 17:38:58.971464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.971501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.971721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.971754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.971951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.971982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.972237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.972270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.972415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.972446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.972636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.972668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.972852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.972884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 
00:28:29.963 [2024-12-09 17:38:58.973091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.973123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.973312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.973345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.973538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.973569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.973776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.973807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.973915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.973947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.974194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.974250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.974450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.974481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.974784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.974818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.975022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.975054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.975246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.975280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 
00:28:29.963 [2024-12-09 17:38:58.975526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.975559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.975732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.975763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.976031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.976063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.976254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.976287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.976499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.976530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.976782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.976813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.976953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.976984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.977179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.977210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.977485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.977517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.977625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.977656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 
00:28:29.963 [2024-12-09 17:38:58.977851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.977883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.978136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.978168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.978290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.978324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.978512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.978544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.978748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.978780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.978927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.978959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.979241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.963 [2024-12-09 17:38:58.979275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.963 qpair failed and we were unable to recover it. 00:28:29.963 [2024-12-09 17:38:58.979544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.979576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.979798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.979830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.980009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.980040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 
00:28:29.964 [2024-12-09 17:38:58.980227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.980260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.980537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.980569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.980777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.980809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.980994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.981029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.981207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.981252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.981504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.981536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.981778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.981810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.982092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.982124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.982377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.982417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.982611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.982642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 
00:28:29.964 [2024-12-09 17:38:58.982762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.982794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.982931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.982962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.983143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.983174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.983369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.983401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.983683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.983713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.983839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.983870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.984047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.984077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.984299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.984353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.984541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.984574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.984818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.984850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 
00:28:29.964 [2024-12-09 17:38:58.984981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.985030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.985230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.985263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.985475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.985507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.985684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.985716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.985919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.985950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.986128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.986161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.986370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.986403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.986673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.986705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.986811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.986843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.987112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.987144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 
00:28:29.964 [2024-12-09 17:38:58.987336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.987369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.987501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.987533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.987715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.987746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.987992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.988023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.988251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.988284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.988530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.964 [2024-12-09 17:38:58.988562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.964 qpair failed and we were unable to recover it. 00:28:29.964 [2024-12-09 17:38:58.988863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.988894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.989081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.989112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.989298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.989331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.989505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.989536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 
00:28:29.965 [2024-12-09 17:38:58.989785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.989816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.990077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.990108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.990252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.990286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.990480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.990524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.990638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.990670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.990857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.990888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.991152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.991184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.991400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.991432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.991641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.991673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.991934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.991965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 
00:28:29.965 [2024-12-09 17:38:58.992181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.992213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.992484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.992517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.992784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.992816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.992950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.992982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.993110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.993142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.993315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.993348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.993547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.993579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.993761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.993792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.994064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.994094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.994272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.994305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 
00:28:29.965 [2024-12-09 17:38:58.994491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.994524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.994700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.994750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.994890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.994921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.995131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.995163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.995413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.995445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.995620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.995653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.995924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.995956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.996171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.996202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.996425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.996457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 00:28:29.965 [2024-12-09 17:38:58.996699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.996731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it. 
00:28:29.965 [2024-12-09 17:38:58.996947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.965 [2024-12-09 17:38:58.996978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:29.965 qpair failed and we were unable to recover it.
00:28:29.969 [identical triplets — connect() failed, errno = 111 / sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeat continuously from 2024-12-09 17:38:58.997176 through 2024-12-09 17:38:59.022796 and are condensed here]
00:28:29.969 [2024-12-09 17:38:59.023065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.969 [2024-12-09 17:38:59.023138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.969 qpair failed and we were unable to recover it.
00:28:29.971 [from this point the tqpair address changes to 0x511500; the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats from 2024-12-09 17:38:59.023298 through 2024-12-09 17:38:59.042537 and is condensed here]
00:28:29.971 [2024-12-09 17:38:59.042719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.042751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.042930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.042961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.043155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.043187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.043388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.043420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.043540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.043572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.043746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.043783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.043959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.043992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.044116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.044149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.044339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.044373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.044504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.044535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 
00:28:29.971 [2024-12-09 17:38:59.044736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.044769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.044951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.044984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.045098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.045132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.045254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.045289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.045491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.045524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.045731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.045762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.045874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.045906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.046102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.046134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.046271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.046304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.046628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.046660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 
00:28:29.971 [2024-12-09 17:38:59.046845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.046877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.047011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.047043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.047157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.047189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.047441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.047474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.047594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.047626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-09 17:38:59.047742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.971 [2024-12-09 17:38:59.047774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.047885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.047918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.048090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.048122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.048325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.048359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.048483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.048514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-09 17:38:59.048623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.048654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.048785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.048818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.049028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.049060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.049241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.049275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.049385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.049418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.049597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.049628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.049801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.049833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.050048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.050080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.050199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.050240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.050428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.050462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-09 17:38:59.050636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.050668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.050788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.050820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.050994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.051028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.051201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.051259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.051453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.051487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.051681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.051714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.051818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.051856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.052038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.052071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.052196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.052237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.052425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.052457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-09 17:38:59.052563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.052596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.052782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.052813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.052939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.052972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.053098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.053130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.053307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.053341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.053527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.053559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.053671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.053703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.053917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.054091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.054123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.054248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.054282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-09 17:38:59.054399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.972 [2024-12-09 17:38:59.054431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-09 17:38:59.054603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.054636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.054829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.054860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.054993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.055026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.055158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.055191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.055407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.055440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.055562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.055595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.055698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.055729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.055902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.055934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.056174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.056209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-09 17:38:59.056468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.056501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.056608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.056639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.056848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.056880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.057001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.057038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.057233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.057266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.057409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.057442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.057547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.057578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.057706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.057738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.057950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.057982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.058126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.058158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-09 17:38:59.058285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.058319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.058455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.058488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.058592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.058623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.058738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.058770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.058951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.058984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.059090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.059122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.059236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.059269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.059388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.059422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.059610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.059642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.059766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.059799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-09 17:38:59.059975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.060007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.060132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.060164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.060305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.060340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.060591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.060624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.060800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.060832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.061046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.061078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.061261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.061295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.061413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.061446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.061556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.061588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-09 17:38:59.061712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.061746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-09 17:38:59.061921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.973 [2024-12-09 17:38:59.061954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.062134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.062166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.062364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.062398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.062504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.062716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.062747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.062858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.062891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.063000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.063031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.063234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.063268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.063397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.063430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.063552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.063583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 
00:28:29.974 [2024-12-09 17:38:59.063764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.063795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.063991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.064023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.064126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.064158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.064357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.064389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.064592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.064631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.064739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.064772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.065011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.065044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.065238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.065277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.065407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.065438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.065605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.065636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 
00:28:29.974 [2024-12-09 17:38:59.065831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.065863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.065966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.065997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.066174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.066207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.066471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.066504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.066609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.066642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.066762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.066793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.066978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.067011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.067196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.067237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.067349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.067383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 00:28:29.974 [2024-12-09 17:38:59.067522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.974 [2024-12-09 17:38:59.067554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:29.974 qpair failed and we were unable to recover it. 
00:28:29.974 [2024-12-09 17:38:59.067676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.974 [2024-12-09 17:38:59.067709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:29.974 qpair failed and we were unable to recover it.
00:28:29.975 [... the same connect()/qpair-failure triplet repeats for tqpair=0x511500 with advancing timestamps through 17:38:59.073164; every connect() to 10.0.0.2:4420 returns errno 111 and no qpair recovers ...]
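errno 111 on the connect() calls above is ECONNREFUSED: each TCP SYN to 10.0.0.2 port 4420 is answered with a reset because nothing is accepting connections on that address/port, so nvme_tcp_qpair_connect_sock() can never bring the qpair up. A minimal sketch, not taken from this run, that reproduces the same errno from userspace; it assumes no listener on the chosen port (4420 just mirrors the log, any unused port behaves the same):

import errno
import socket

def try_connect(addr: str, port: int) -> int:
    """One TCP connect attempt; returns the errno, or 0 on success."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((addr, port))
            return 0
        except OSError as exc:
            return exc.errno or -1

if __name__ == "__main__":
    rc = try_connect("127.0.0.1", 4420)  # assumes no NVMe/TCP target listens locally
    print(rc, errno.errorcode.get(rc, "OK"))  # on Linux prints: 111 ECONNREFUSED

On Linux the refused connect surfaces as errno 111, which is exactly what the posix.c:1054 line records before the qpair is torn down.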
00:28:29.975 [2024-12-09 17:38:59.073425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.975 [2024-12-09 17:38:59.073497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.975 qpair failed and we were unable to recover it.
00:28:29.976 [... identical triplets continue through 17:38:59.082029, mostly for tqpair=0x7f804c000b90, with a few more for tqpair=0x511500 and one for tqpair=0x7f8048000b90; address and port stay 10.0.0.2:4420 and errno stays 111 ...]
00:28:29.976 [2024-12-09 17:38:59.082204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.976 [2024-12-09 17:38:59.082246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:29.976 qpair failed and we were unable to recover it.
00:28:30.261 [... the triplet repeats unbroken for tqpair=0x7f804c000b90 through 17:38:59.111390; every connect() to 10.0.0.2:4420 is refused (errno 111) and no qpair is recovered ...]
00:28:30.261 [2024-12-09 17:38:59.111510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.261 [2024-12-09 17:38:59.111542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.261 qpair failed and we were unable to recover it. 00:28:30.261 [2024-12-09 17:38:59.111764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.261 [2024-12-09 17:38:59.111795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.261 qpair failed and we were unable to recover it. 00:28:30.261 [2024-12-09 17:38:59.111906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.261 [2024-12-09 17:38:59.111938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.261 qpair failed and we were unable to recover it. 00:28:30.261 [2024-12-09 17:38:59.112187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.261 [2024-12-09 17:38:59.112241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.261 qpair failed and we were unable to recover it. 00:28:30.261 [2024-12-09 17:38:59.112344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.261 [2024-12-09 17:38:59.112376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.261 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.112571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.112603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.112737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.112769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.112895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.112926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.113029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.113060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.113187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.113226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 
00:28:30.262 [2024-12-09 17:38:59.113351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.113382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.113585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.113617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.113794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.113827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.113940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.113971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.114086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.114117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.114360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.114399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.114589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.114622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.114831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.114863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.115057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.115088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.115230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.115263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 
00:28:30.262 [2024-12-09 17:38:59.115401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.115434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.115546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.115576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.115809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.115841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.115957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.115989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.116110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.116141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.116319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.116353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.116525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.116557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.116733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.116764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.116971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.117002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.117132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.117164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 
00:28:30.262 [2024-12-09 17:38:59.117287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.117319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.117450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.117481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.117588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.117620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.117794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.117825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.117998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.118031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.118209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.118264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.118440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.118471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.118587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.118618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.118757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.118789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.118921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.118952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 
00:28:30.262 [2024-12-09 17:38:59.119060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.119092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.119199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.119240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.119459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.119530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.262 [2024-12-09 17:38:59.119678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.262 [2024-12-09 17:38:59.119714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.262 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.119831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.119864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.120048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.120080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.120266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.120299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.120424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.120456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.120567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.120599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.120722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.120754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 
00:28:30.263 [2024-12-09 17:38:59.120863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.120895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.121040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.121071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.121180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.121212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.121344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.121377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.121503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.121534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.121709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.121751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.121872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.121904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.122018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.122049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.122162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.122194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.122392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.122425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 
00:28:30.263 [2024-12-09 17:38:59.122552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.122583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.122728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.122759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.122951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.122983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.123098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.123129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.123248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.123282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.123398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.123430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.123549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.123582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.123763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.123795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.123982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.124014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.124138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.124171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 
00:28:30.263 [2024-12-09 17:38:59.124340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.124372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.124563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.124594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.124793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.124825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.125001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.125033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.125152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.125183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.125434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.125466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.125675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.125706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.125884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.125915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.126028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.126060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.126195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.126236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 
00:28:30.263 [2024-12-09 17:38:59.126409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.126440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.126565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.263 [2024-12-09 17:38:59.126596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.263 qpair failed and we were unable to recover it. 00:28:30.263 [2024-12-09 17:38:59.126757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.126829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.127029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.127065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.127180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.127213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.127418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.127451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.127575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.127607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.127783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.127815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.127939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.127971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.128215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.128262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 
00:28:30.264 [2024-12-09 17:38:59.128437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.128469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.128662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.128693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.128811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.128843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.129043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.129074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.129251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.129284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.129395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.129438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.129559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.129592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.129766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.129797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.130066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.130099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.130317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.130351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 
00:28:30.264 [2024-12-09 17:38:59.130616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.130649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.130780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.130812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.131013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.131044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.131181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.131214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.131510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.131542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.131668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.131700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.131899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.131933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.132108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.132140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.132276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.132308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.132488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.132520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 
00:28:30.264 [2024-12-09 17:38:59.132636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.132668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.132789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.132820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.132929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.132960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.133152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.133184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.133435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.133468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.133575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.133607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.133869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.133900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.134043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.134074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.134249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.134283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.134467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.134498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 
00:28:30.264 [2024-12-09 17:38:59.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.264 [2024-12-09 17:38:59.134770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.264 qpair failed and we were unable to recover it. 00:28:30.264 [2024-12-09 17:38:59.134966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.134999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.135252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.135286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.135407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.135439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.135564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.135595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.135813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.135845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.136024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.136056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.136167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.136198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.136393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.136425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.136600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.136631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 
00:28:30.265 [2024-12-09 17:38:59.136745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.136776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.136885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.136917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.137090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.137122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.137262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.137295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.137496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.137529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.137652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.137693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.137878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.137910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.138038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.138070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.138206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.138256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.138378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.138409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 
00:28:30.265 [2024-12-09 17:38:59.138594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.138626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.138751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.138782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.138899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.138931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.139122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.139155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.139288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.139322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.139440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.139472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.139587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.139619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.139727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.139759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.140034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.140066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.140295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.140330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 
00:28:30.265 [2024-12-09 17:38:59.140507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.140542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.140649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.140682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.140857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.140888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.141064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.141096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.141230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.265 [2024-12-09 17:38:59.141276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.265 qpair failed and we were unable to recover it. 00:28:30.265 [2024-12-09 17:38:59.141456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.141489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.141749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.141782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.141919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.141951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.142202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.142251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.142377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.142409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 
00:28:30.266 [2024-12-09 17:38:59.142534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.142566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.142686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.142717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.142881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.142953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.143168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.143203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.143352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.143385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.143523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.143555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.143732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.143763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.143890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.143922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.144117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.144150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.144288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.144322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 
00:28:30.266 [2024-12-09 17:38:59.144510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.144541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.144657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.144689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.144872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.144903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.145083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.145115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.145292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.145326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.145442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.145484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.145702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.145733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.145962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.145996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.146137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.146169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.146306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.146339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 
00:28:30.266 [2024-12-09 17:38:59.146524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.146556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.146686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.146718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.146828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.146860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.146968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.147000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.147205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.147251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.147430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.147463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.147587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.147620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.147735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.147767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.147960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.147992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.148185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.148228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 
00:28:30.266 [2024-12-09 17:38:59.148443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.148481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.148601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.148634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.148755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.148787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.266 [2024-12-09 17:38:59.148903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.266 [2024-12-09 17:38:59.148936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.266 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.149059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.149091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.149238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.149271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.149384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.149417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.149604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.149637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.149835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.149867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.150044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.150075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 
00:28:30.267 [2024-12-09 17:38:59.150211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.150254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.150379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.150411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.150648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.150718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.150860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.150896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.151079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.151112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.151325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.151359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.151547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.151579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.151697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.151729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.151851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.151882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.152053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.152084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 
00:28:30.267 [2024-12-09 17:38:59.152213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.152265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.152370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.152401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.152591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.152622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.152747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.152779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.152965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.152997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.153104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.153135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.153334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.153368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.153558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.153590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.153764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.153795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.153981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.154012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 
00:28:30.267 [2024-12-09 17:38:59.154204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.154243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.154372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.154404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.154531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.154560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.154684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.154716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.154896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.154929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.155143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.155174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.155362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.155396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.155536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.155566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.155741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.155772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.155999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.156031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 
00:28:30.267 [2024-12-09 17:38:59.156148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.156179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.156457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.156491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.156619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.267 [2024-12-09 17:38:59.156650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.267 qpair failed and we were unable to recover it. 00:28:30.267 [2024-12-09 17:38:59.156834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.156867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.157051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.157083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.157258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.157290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.157415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.157446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.157582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.157613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.157749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.157780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.157975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.158022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 
00:28:30.268 [2024-12-09 17:38:59.158150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.158181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.158306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.158341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.158588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.158644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.158765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.158809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.158941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.158975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.159156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.159188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.159379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.159412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.159536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.159581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.159860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.159893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.160074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.160106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 
00:28:30.268 [2024-12-09 17:38:59.160283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.160316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.160443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.160475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.160590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.160621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.160747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.160779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.160955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.160987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.161191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.161235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.161362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.161394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.161526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.161557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.161793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.161824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.161951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.161983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 
00:28:30.268 [2024-12-09 17:38:59.162120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.162151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.162268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.162302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.162520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.162552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.162659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.162691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.162869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.162901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.163016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.163048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.163236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.163269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.163373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.163405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.163525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.163556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.163748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.163779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 
00:28:30.268 [2024-12-09 17:38:59.163964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.163996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.164100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.268 [2024-12-09 17:38:59.164133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.268 qpair failed and we were unable to recover it. 00:28:30.268 [2024-12-09 17:38:59.164253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.164288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.164494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.164526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.164648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.164680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.164799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.164830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.164944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.164976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.165154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.165186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.165340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.165371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.165490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.165522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 
00:28:30.269 [2024-12-09 17:38:59.165645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.165677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.165857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.165889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.165995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.166033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.166154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.166185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.166314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.166347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.166461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.166492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.166606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.166638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.166759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.166791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.167060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.167091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.167209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.167250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 
00:28:30.269 [2024-12-09 17:38:59.167359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.167390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.167521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.167553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.167676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.167708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.167903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.167934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.168049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.168081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.168204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.168244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.168523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.168555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.168674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.168705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.168894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.168926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.169050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.169081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 
00:28:30.269 [2024-12-09 17:38:59.169260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.169293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.169484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.169516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.169638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.169670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.169789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.169820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.169992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.170022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.170132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.170163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.170292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.170325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.170442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.170473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.269 [2024-12-09 17:38:59.170664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.269 [2024-12-09 17:38:59.170696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.269 qpair failed and we were unable to recover it. 00:28:30.270 [2024-12-09 17:38:59.170822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.270 [2024-12-09 17:38:59.170855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.270 qpair failed and we were unable to recover it. 
00:28:30.275 [2024-12-09 17:38:59.205500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.205531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.205722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.205754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.205864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.205896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.206023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.206055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.206239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.206273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.206397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.206428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.206555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.206586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.206723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.206755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.206871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.206901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.207146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.207215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 
00:28:30.275 [2024-12-09 17:38:59.207387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.207423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.207542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.207573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.207762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.207794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.208039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.208071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.208188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.208233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.208368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.208399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.208515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.208547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.208667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.208698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.208803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.208835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.209083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.209114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 
00:28:30.275 [2024-12-09 17:38:59.209307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.209342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.209455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.209486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.209661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.209693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.209878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.209910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.210093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.210240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.210394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.210542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.210678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.210823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 
00:28:30.275 [2024-12-09 17:38:59.210963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.210995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.211109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.211140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.211397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.211431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.211561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.211593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.211727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.211758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.212004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.212036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.212145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.212182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.212312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.212345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.275 qpair failed and we were unable to recover it. 00:28:30.275 [2024-12-09 17:38:59.212458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.275 [2024-12-09 17:38:59.212488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.212614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.212646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 
00:28:30.276 [2024-12-09 17:38:59.212754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.212785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.212896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.212927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.213110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.213142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.213314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.213349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.213477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.213507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.213750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.213782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.213971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.214003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.214108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.214139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.214326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.214359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.214538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.214570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 
00:28:30.276 [2024-12-09 17:38:59.214759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.214791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.214965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.214998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.215236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.215270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.215394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.215427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.215562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.215594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.215816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.215848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.215966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.215998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.216110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.216141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.216315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.216348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.216562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.216594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 
00:28:30.276 [2024-12-09 17:38:59.216712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.216743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.216858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.216889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.217102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.217266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.217433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.217590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.217741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.217873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.217982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.218014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.218128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.218160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 
00:28:30.276 [2024-12-09 17:38:59.218307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.218341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.218450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.218482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.218594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.218625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.218823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.218855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.218978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.219009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.219121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.219153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.219277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.219310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.219426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.276 [2024-12-09 17:38:59.219462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.276 qpair failed and we were unable to recover it. 00:28:30.276 [2024-12-09 17:38:59.219702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.219733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.219841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.219872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 
00:28:30.277 [2024-12-09 17:38:59.220011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.220042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.220162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.220194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.220315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.220348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.220475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.220507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.220716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.220748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.220862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.220893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.221099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.221130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.221235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.221268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.221541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.221572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.221700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.221732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 
00:28:30.277 [2024-12-09 17:38:59.221852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.221888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.222032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.222063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.222248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.222280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.222410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.222442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.222563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.222594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.222715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.222746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.222863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.222894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.223132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.223163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.223345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.223378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.223487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.223518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 
00:28:30.277 [2024-12-09 17:38:59.223691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.223723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.223960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.223992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.224116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.224149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.224251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.224283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.224410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.224441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.224580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.224612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.224800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.224832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.225024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.225056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.225237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.225270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.225492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.225522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 
00:28:30.277 [2024-12-09 17:38:59.225645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.225677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.225807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.225839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.225946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.225978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.226180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.226211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.226362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.226392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.226515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.226547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.226667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.277 [2024-12-09 17:38:59.226698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.277 qpair failed and we were unable to recover it. 00:28:30.277 [2024-12-09 17:38:59.226823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.226859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.226981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.227013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.227199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.227240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 
00:28:30.278 [2024-12-09 17:38:59.227347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.227380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.227500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.227531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.227703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.227734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.227934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.227966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.228210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.228255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.228384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.228416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.228589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.228620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.228747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.228780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.228963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.228994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.229177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.229208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 
00:28:30.278 [2024-12-09 17:38:59.229351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.229383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.229500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.229532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.229657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.229690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.229799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.229831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.229939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.229971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.230158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.230190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.230390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.230422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.230596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.230628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.230743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.230775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 00:28:30.278 [2024-12-09 17:38:59.230968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.278 [2024-12-09 17:38:59.230999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.278 qpair failed and we were unable to recover it. 
00:28:30.278 [2024-12-09 17:38:59.231191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.278 [2024-12-09 17:38:59.231234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.278 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for each subsequent reconnection attempt, with only the timestamps advancing (2024-12-09 17:38:59.231429 through 17:38:59.271141) ...]
00:28:30.284 [2024-12-09 17:38:59.271274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.284 [2024-12-09 17:38:59.271307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.284 qpair failed and we were unable to recover it.
00:28:30.284 [2024-12-09 17:38:59.271489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.271521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.271710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.271741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.271844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.271876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.271989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.272021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.272190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.272227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.272488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.272520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.272694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.272725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.272928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.272959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.273065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.273096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.273343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.273376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 
00:28:30.284 [2024-12-09 17:38:59.273502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.273533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.273741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.273773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.274028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.274061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.274246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.274278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.274480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.274511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.274750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.274782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.275062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.275093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.275227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.275260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.275437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.275469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.275643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.275675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 
00:28:30.284 [2024-12-09 17:38:59.275861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.275892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.276078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.276109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.276298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.276330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.276444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.276475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.276599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.276630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.276806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.276838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.277018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.277049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.277243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.277276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.277538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.277569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.277762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.277793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 
00:28:30.284 [2024-12-09 17:38:59.277933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.277964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.284 [2024-12-09 17:38:59.278165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.284 [2024-12-09 17:38:59.278196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.284 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.278396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.278428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.278553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.278584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.278687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.278718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.278901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.278933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.279200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.279238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.279416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.279448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.279649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.279681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.279926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.279963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 
00:28:30.285 [2024-12-09 17:38:59.280153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.280185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.280328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.280361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.280530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.280561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.280684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.280716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.280908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.280939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.281239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.281272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.281454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.281485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.281591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.281622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.281807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.281837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.282022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.282055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 
00:28:30.285 [2024-12-09 17:38:59.282174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.282205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.282402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.282434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.282551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.282582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.282848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.282879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.283000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.283032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.283155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.283186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.283389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.283421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.283612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.283644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.283778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.283809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.283982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.284014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 
00:28:30.285 [2024-12-09 17:38:59.284203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.284246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.284359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.284390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.284628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.284659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.284764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.284795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.284975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.285007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.285128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.285160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.285286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.285324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.285451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.285483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.285598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.285629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.285812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.285844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 
00:28:30.285 [2024-12-09 17:38:59.286018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.286050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.285 qpair failed and we were unable to recover it. 00:28:30.285 [2024-12-09 17:38:59.286293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.285 [2024-12-09 17:38:59.286327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.286444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.286474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.286660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.286692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.286872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.286903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.287084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.287116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.287243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.287278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.287518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.287551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.287723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.287756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.287976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.288008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 
00:28:30.286 [2024-12-09 17:38:59.288122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.288154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.288393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.288426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.288540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.288572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.288689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.288720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.288895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.288927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.289100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.289131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.289303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.289336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.289463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.289495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.289689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.289721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.289844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.289876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 
00:28:30.286 [2024-12-09 17:38:59.290004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.290036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.290214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.290255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.290427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.290459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.290566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.290597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.290723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.290755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.290877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.290909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.291026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.291057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.291176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.291208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.291329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.291359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.291491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.291524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 
00:28:30.286 [2024-12-09 17:38:59.291698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.291730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.291853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.291884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.292095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.292126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.292302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.292336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.292515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.292547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.292728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.292759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.292876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.292908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.293161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.293197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.293415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.293447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.293573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.293603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 
00:28:30.286 [2024-12-09 17:38:59.293863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.286 [2024-12-09 17:38:59.293895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.286 qpair failed and we were unable to recover it. 00:28:30.286 [2024-12-09 17:38:59.294006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.294038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.294211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.294253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.294500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.294532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.294716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.294748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.294985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.295016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.295126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.295158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.295341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.295375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.295480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.295512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.295695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.295726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 
00:28:30.287 [2024-12-09 17:38:59.295898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.295931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.296060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.296091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.296278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.296332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.296471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.296502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.296607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.296639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.296885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.296917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.297164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.297196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.297377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.297409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.297539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.297571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 00:28:30.287 [2024-12-09 17:38:59.297863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.287 [2024-12-09 17:38:59.297894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.287 qpair failed and we were unable to recover it. 
00:28:30.287 [2024-12-09 17:38:59.298020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.287 [2024-12-09 17:38:59.298052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.287 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and qpair connection error repeat back-to-back for tqpair=0x511500 with addr=10.0.0.2, port=4420; the retries then continue against a new qpair address, 0x7f8054000b90 ...]
00:28:30.289 [2024-12-09 17:38:59.315641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.289 [2024-12-09 17:38:59.315674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.289 qpair failed and we were unable to recover it.
[... the same connect() failure repeats back-to-back for tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 ...]
00:28:30.293 [2024-12-09 17:38:59.338127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.293 [2024-12-09 17:38:59.338157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.293 qpair failed and we were unable to recover it.
00:28:30.293 [2024-12-09 17:38:59.338273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.338307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.338413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.338444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.338726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.338758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.338889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.338920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.339165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.339197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.339323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.339354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.339589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.339762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.339793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.339972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.340005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.340182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.340214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 
00:28:30.293 [2024-12-09 17:38:59.340350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.340382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.340524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.340556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.340741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.340772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.340899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.340930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.341169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.341202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.341325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.341357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.341524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.341556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.341758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.341790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.341928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.341960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.342156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.342188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 
00:28:30.293 [2024-12-09 17:38:59.342444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.342476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.342598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.342629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.342750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.342780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.342896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.342928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.343106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.343137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.343313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.343346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.343518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.343549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.343659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.343690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.343977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.344008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.344133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.344165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 
00:28:30.293 [2024-12-09 17:38:59.344363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.344396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.344507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.344538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.344749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.344781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.344971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.345003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.345193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.345234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.345341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.293 [2024-12-09 17:38:59.345372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.293 qpair failed and we were unable to recover it. 00:28:30.293 [2024-12-09 17:38:59.345494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.345532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.345653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.345684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.345796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.345829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.346010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.346042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 
00:28:30.294 [2024-12-09 17:38:59.346232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.346264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.346451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.346482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.346726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.346758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.346946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.346977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.347085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.347117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.347303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.347335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.347506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.347543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.347736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.347768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.347905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.347936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.348051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.348082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 
00:28:30.294 [2024-12-09 17:38:59.348271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.348304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.348422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.348454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.348639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.348672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.348815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.348846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.349031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.349063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.349181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.349214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.349355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.349387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.349571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.349603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.349793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.349824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.350015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.350048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 
00:28:30.294 [2024-12-09 17:38:59.350173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.350204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.350394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.350426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.350600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.350632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.350757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.350789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.350974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.351006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.351123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.351154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.351276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.351310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.351498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.351529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.351720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.351753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.351962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.351994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 
00:28:30.294 [2024-12-09 17:38:59.352173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.352205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.352330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.352362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.352510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.352544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.352738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.352770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.352942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.294 [2024-12-09 17:38:59.352974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.294 qpair failed and we were unable to recover it. 00:28:30.294 [2024-12-09 17:38:59.353149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.353181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.353413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.353457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.353584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.353616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.353732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.353763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.353942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.353975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 
00:28:30.295 [2024-12-09 17:38:59.354080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.354111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.354241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.354275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.354390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.354421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.354555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.354587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.354771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.354804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.354977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.355009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.355113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.355145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.355318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.355351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.355477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.355509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.355618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.355649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 
00:28:30.295 [2024-12-09 17:38:59.355761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.355793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.356017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.356049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.356227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.356261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.356447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.356479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.356600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.356631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.356822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.356854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.356954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.356985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.357094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.357126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.357302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.357335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.357446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.357478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 
00:28:30.295 [2024-12-09 17:38:59.357677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.357709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.357839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.357870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.357975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.358115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.358323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.358461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.358615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.358762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.358903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.358934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.359051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.359084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 
00:28:30.295 [2024-12-09 17:38:59.359297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.359329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.359573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.359605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.359795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.359827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.359930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.295 [2024-12-09 17:38:59.359961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.295 qpair failed and we were unable to recover it. 00:28:30.295 [2024-12-09 17:38:59.360132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.360163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.360280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.360312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.360515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.360553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.360728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.360760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.360945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.360975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.361103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.361136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 
00:28:30.296 [2024-12-09 17:38:59.361280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.361313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.361432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.361464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.361726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.361759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.361888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.361919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.362089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.362121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.362241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.362274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.362397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.362429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.362538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.362569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.362765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.362797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 00:28:30.296 [2024-12-09 17:38:59.362913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.296 [2024-12-09 17:38:59.362945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.296 qpair failed and we were unable to recover it. 
00:28:30.296 [2024-12-09 17:38:59.363121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.296 [2024-12-09 17:38:59.363153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.296 qpair failed and we were unable to recover it.
00:28:30.297 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for tqpair=0x7f8054000b90 from 17:38:59.363270 through 17:38:59.372594 ...]
00:28:30.297 [2024-12-09 17:38:59.372764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.297 [2024-12-09 17:38:59.372834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.297 qpair failed and we were unable to recover it.
00:28:30.301 [... the same sequence repeats for tqpair=0x7f8048000b90 from 17:38:59.372987 through 17:38:59.402273, ending with:]
00:28:30.301 [2024-12-09 17:38:59.402241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.301 [2024-12-09 17:38:59.402273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.301 qpair failed and we were unable to recover it.
00:28:30.301 [2024-12-09 17:38:59.402448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.301 [2024-12-09 17:38:59.402481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.301 qpair failed and we were unable to recover it. 00:28:30.301 [2024-12-09 17:38:59.402612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.301 [2024-12-09 17:38:59.402644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.301 qpair failed and we were unable to recover it. 00:28:30.301 [2024-12-09 17:38:59.402886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.301 [2024-12-09 17:38:59.402918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.301 qpair failed and we were unable to recover it. 00:28:30.301 [2024-12-09 17:38:59.403095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.403166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.403340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.403378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.403570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.403603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.403778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.403810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.404004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.404037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.404208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.404254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.404442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.404474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 
00:28:30.302 [2024-12-09 17:38:59.404582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.404615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.404736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.404769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.404901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.404933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.405119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.405151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.405336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.405370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.405473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.405504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.405688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.405730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.405931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.405962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.406090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.406122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.406249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.406281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 
00:28:30.302 [2024-12-09 17:38:59.406399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.406430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.406558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.406589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.406772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.406803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.406991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.407023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.407146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.407176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.407306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.407340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.407514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.407546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.407674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.407706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.407813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.407844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.408031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.408063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 
00:28:30.302 [2024-12-09 17:38:59.408171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.408204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.408334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.408366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.408503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.408536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.408722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.408754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.408880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.408912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.409013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.409044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.409151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.409183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.409379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.409412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.409601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.409633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.409749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.409781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 
00:28:30.302 [2024-12-09 17:38:59.409890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.409923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.410027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.410058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.410168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.410200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.410477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.410512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.410636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.410667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.302 [2024-12-09 17:38:59.410779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.302 [2024-12-09 17:38:59.410823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.302 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.411105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.411143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.411249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.411282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.411471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.411504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.411630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.411667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 
00:28:30.303 [2024-12-09 17:38:59.411862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.411897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.412032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.412065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.412241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.412275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.412410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.412441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.412616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.412648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.412766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.412811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.412967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.413014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.413227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.413263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.413379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.413411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.413512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.413543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 
00:28:30.303 [2024-12-09 17:38:59.413671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.413716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.413905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.413937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.414110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.414141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.414265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.414298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.414398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.414431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.414545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.414576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.414757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.414794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.414925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.414963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.303 [2024-12-09 17:38:59.415246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.303 [2024-12-09 17:38:59.415279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.303 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.415414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.415446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 
00:28:30.582 [2024-12-09 17:38:59.415564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.415597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.415705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.415737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.415845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.415876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.416058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.416091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.416268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.416301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.416412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.416445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.416560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.416593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.416706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.416738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.416911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.416944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.417125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.417156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 
00:28:30.582 [2024-12-09 17:38:59.417282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.417315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.417441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.417473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.417650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.417682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.417925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.417962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.418072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.418103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.418247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.418280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.418389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.418422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.418546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.418579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.418760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.418791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.418910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.418942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 
00:28:30.582 [2024-12-09 17:38:59.419065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.419097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.419203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.419248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.419434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.419467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.419603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.419636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.582 [2024-12-09 17:38:59.419763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.582 [2024-12-09 17:38:59.419794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.582 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.419975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.420007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.420127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.420159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.420322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.420357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.420467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.420500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.420734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 
00:28:30.583 [2024-12-09 17:38:59.420933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.420965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.421151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.421184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.421322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.421356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.421472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.421505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.421684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.421715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.421838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.421870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.421986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.422018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.422280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.422315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.422424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.422455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.422574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.422606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 
00:28:30.583 [2024-12-09 17:38:59.422717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.422749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.422948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.422981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.423153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.423184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.423301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.423335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.423521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.423554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.423756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.423789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.423924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.423956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.424133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.424165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.424349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.424381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.424497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.424530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 
00:28:30.583 [2024-12-09 17:38:59.424753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.424786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.424905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.424937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.425070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.425102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.425242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.425280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.425458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.425491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.425741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.425773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.425891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.425922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.426166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.426199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.426315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.426347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 00:28:30.583 [2024-12-09 17:38:59.426526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.583 [2024-12-09 17:38:59.426559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.583 qpair failed and we were unable to recover it. 
00:28:30.583 [2024-12-09 17:38:59.426682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.583 [2024-12-09 17:38:59.426714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.583 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.463532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x51f460 (9): Bad file descriptor
00:28:30.588 [2024-12-09 17:38:59.463746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.463813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.464069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.464142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.464398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.464469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.464763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.464801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.464982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.465021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.465157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.465191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.465336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.465370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.465514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.465547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.465668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.588 [2024-12-09 17:38:59.465701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.588 qpair failed and we were unable to recover it.
00:28:30.588 [2024-12-09 17:38:59.465836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.588 [2024-12-09 17:38:59.465869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.588 qpair failed and we were unable to recover it. 00:28:30.588 [2024-12-09 17:38:59.466056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.588 [2024-12-09 17:38:59.466094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.588 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.466283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.466320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.466531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.466562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.466744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.466778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.466976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.467018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.467190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.467239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.467442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.467477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.467651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.467683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.467873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.467905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 
00:28:30.589 [2024-12-09 17:38:59.468122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.468158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.468356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.468391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.468580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.468614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.468809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.468844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.469042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.469075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.469285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.469319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.469448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.469483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.469672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.469708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.469917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.469964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.470238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.470272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 
00:28:30.589 [2024-12-09 17:38:59.470376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.470410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.470602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.470636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.470828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.470861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.471036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.471070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.471333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.471367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.471482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.471516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.471639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.471673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.471790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.471823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.471941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.471974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.472163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.472196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 
00:28:30.589 [2024-12-09 17:38:59.472344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.472378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.472487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.472521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.472715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.472747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.473037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.473071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.473250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.473285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.473474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.473507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.473613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.473646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.473884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.473917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.474131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.474169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 00:28:30.589 [2024-12-09 17:38:59.474291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.589 [2024-12-09 17:38:59.474326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.589 qpair failed and we were unable to recover it. 
00:28:30.589 [2024-12-09 17:38:59.474581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.474614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.474803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.474837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.475100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.475133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.475373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.475408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.475599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.475632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.475741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.475781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.475979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.476013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.476149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.476183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.476391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.476426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.476608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.476641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 
00:28:30.590 [2024-12-09 17:38:59.476776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.476809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.477075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.477108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.477292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.477327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.477466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.477500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.477637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.477670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.477778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.477811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.478053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.478086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.478279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.478313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.478432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.478465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.478701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.478735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 
00:28:30.590 [2024-12-09 17:38:59.478852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.478887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.479098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.479132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.479309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.479345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.479488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.479520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.479649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.479681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.479869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.479902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.480022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.480055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.480295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.480329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.480521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.480554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.480727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.480760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 
00:28:30.590 [2024-12-09 17:38:59.481003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.481038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.481179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.481213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.481398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.481437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.481619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.481651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.481757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.481790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.482061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.482093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.482214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.482257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.482436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.482470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.482589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.482621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 00:28:30.590 [2024-12-09 17:38:59.482739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.590 [2024-12-09 17:38:59.482773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.590 qpair failed and we were unable to recover it. 
00:28:30.591 [2024-12-09 17:38:59.483009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.483042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.483162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.483194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.483340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.483374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.483568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.483602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.483708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.483741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.483917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.483950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.484088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.484126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.484392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.484427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.484561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.484595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.484720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.484753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 
00:28:30.591 [2024-12-09 17:38:59.484996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.485029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.485236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.485272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.485515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.485549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.485688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.485722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.485894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.485928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.486102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.486136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.486320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.486354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.486533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.486567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.486809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.486843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.487031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.487070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 
00:28:30.591 [2024-12-09 17:38:59.487194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.487242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.487492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.487526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.487648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.487682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.487853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.487887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.488017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.488051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.488298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.488333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.488595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.488629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.488824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.488858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.489064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.489097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.489338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.489373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 
00:28:30.591 [2024-12-09 17:38:59.489496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.489530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.489710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.489744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.489868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.489900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.591 [2024-12-09 17:38:59.490081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.591 [2024-12-09 17:38:59.490115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.591 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.490251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.490285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.490498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.490532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.490658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.490692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.490888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.490921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.491198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.491240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.491424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.491458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 
00:28:30.592 [2024-12-09 17:38:59.491578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.491612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.491725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.491757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.491951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.491984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.492161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.492194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.496251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.496310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.496619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.496660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.496942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.496981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.497179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.497214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.497431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.497467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.497646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.497682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 
00:28:30.592 [2024-12-09 17:38:59.497819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.497852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.498050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.498085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.498278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.498314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.498512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.498545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.498760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.498794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.499076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.499112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.499248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.499283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.499489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.499523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.499640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.499673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.499858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.499899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 
00:28:30.592 [2024-12-09 17:38:59.500016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.500048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.500286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.500320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.500438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.500472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.500646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.500678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.500938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.500971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.501143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.501177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.501339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.501373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.501550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.501582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.501761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.501787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 00:28:30.592 [2024-12-09 17:38:59.501949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-12-09 17:38:59.501974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.592 qpair failed and we were unable to recover it. 
00:28:30.592 [2024-12-09 17:38:59.502128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.593 [2024-12-09 17:38:59.502155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.593 qpair failed and we were unable to recover it.
00:28:30.593-00:28:30.598 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair repeats for roughly 200 further connection attempts between 2024-12-09 17:38:59.502269 and 17:38:59.545788, almost all against tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 (tqpair=0x7f8048000b90, tqpair=0x7f804c000b90, and tqpair=0x511500 appear once each); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:30.598 [2024-12-09 17:38:59.545904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.545936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.546197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.546239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.546486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.546519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.546709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.546742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.546849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.546883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.547145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.547178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.547381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.547421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.547544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.547576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.547817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.547850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.548092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.548123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 
00:28:30.598 [2024-12-09 17:38:59.548307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.548340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.548512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.548544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.548788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.548821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.549073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.549107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.549279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.549314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.549554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.549587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.549801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.549834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.550075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.550107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.598 [2024-12-09 17:38:59.550285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.598 [2024-12-09 17:38:59.550320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.598 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.550552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.550584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 
00:28:30.599 [2024-12-09 17:38:59.550859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.550892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.551087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.551120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.551308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.551343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.551547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.551579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.551771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.551805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.551990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.552023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.552141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.552174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.552294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.552328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.552508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.552541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.552665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.552697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 
00:28:30.599 [2024-12-09 17:38:59.552825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.552859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.552979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.553011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.553252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.553286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.553466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.553499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.553607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.553640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.553851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.553885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.554070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.554102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.554233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.554266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.554477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.554510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.554684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.554717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 
00:28:30.599 [2024-12-09 17:38:59.554843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.554875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.555116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.555149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.555365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.555400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.555584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.555616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.555788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.555820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.556012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.556045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.556150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.556188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.556373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.556446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.556652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.556689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 00:28:30.599 [2024-12-09 17:38:59.556876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.599 [2024-12-09 17:38:59.556910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.599 qpair failed and we were unable to recover it. 
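Note: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection to 10.0.0.2:4420 (the NVMe/TCP well-known port) was actively refused, which usually means nothing was listening on that port on the target at the time of the attempt. A minimal standalone sketch (illustrative only, not part of the SPDK test code) that reproduces the same errno by connecting to a reachable host with no listener on the port:

    /* Illustrative sketch: connect() to a TCP port with no listener fails
     * with errno 111 (ECONNREFUSED) on Linux -- the same condition that
     * posix_sock_create reports above. Address and port mirror the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }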
[... the identical failure sequence repeats against tqpair=0x7f8048000b90 from 17:38:59.557 through 17:38:59.579 ...]
00:28:30.602 [2024-12-09 17:38:59.579770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.602 [2024-12-09 17:38:59.579841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.602 qpair failed and we were unable to recover it.
[... the identical failure sequence repeats against tqpair=0x7f804c000b90 through at least 17:38:59.583 ...]
00:28:30.602 [2024-12-09 17:38:59.583811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.602 [2024-12-09 17:38:59.583844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.602 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.584038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.584072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.584208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.584248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.584425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.584459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.584594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.584628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.584803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.584836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.585025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.585059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.585303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.585338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.585533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.585566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.585754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.585788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-09 17:38:59.585967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.586001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.586189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.586232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.586352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.586385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.586569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.586602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.586736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.586769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.587030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.587063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.587255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.587291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.587419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.587453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.587734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.587768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.587956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.587990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-09 17:38:59.588275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.588310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.588436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.588470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.588656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.588689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.589001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.589074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.589292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.589333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.589466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.589500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.589756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.589788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.589961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.589993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.590169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.590202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.590400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.590433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-09 17:38:59.590666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.590699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.590824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.590856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.591042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.591073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.591258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.591290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.591435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.591467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.591777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.591810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.591931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.591963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.592238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.592272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.592454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.592487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-09 17:38:59.592739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-09 17:38:59.592771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-09 17:38:59.592959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.592991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.593097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.593130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.593306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.593340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.593464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.593496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.593622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.593655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.593891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.593925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.594099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.594132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.594305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.594338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.594533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.594566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.594748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.594780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-09 17:38:59.594954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.594992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.595163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.595195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.595380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.595412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.595609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.595641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.595905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.595938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.596143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.596175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.596393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.596426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.596561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.596594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.596811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.596843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.597093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.597126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-09 17:38:59.597303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.597337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.597456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.597487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.597608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.597641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.597826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.597859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.597986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.598019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.598150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.598183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.598450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.598484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.598795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.598828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.598944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.598977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.599162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.599194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-09 17:38:59.599381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.599414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-09 17:38:59.599623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-09 17:38:59.599656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.599849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.599880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.600068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.600100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.600236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.600270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.600523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.600556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.600693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.600726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.600854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.600893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.601137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.601169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.601355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.601390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-09 17:38:59.601683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.601836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.601869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.602008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.602248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.602282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.602469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.602502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.602742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.602775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.603012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.603043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.603229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.603265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.603553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.603585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.603769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.603801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-09 17:38:59.603925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.603958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.604240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.604277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.604402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.604434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.604567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.604600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.604794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.604827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.604998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.605030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.605295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.605328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.605518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.605551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.605672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.605704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.605943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.605976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-09 17:38:59.606162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.606195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.606394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.606428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.606530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.606563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.606752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.606784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.607087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.607119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.607317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.607352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.607470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.607503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.607634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.607666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.607852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.607884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.608103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.608136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-09 17:38:59.608270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.608324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-09 17:38:59.608455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-09 17:38:59.608488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.608619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.608652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.608767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.608800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.608912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.608945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.609065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.609098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.609291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.609323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.609563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.609595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.609786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.609825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.610064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.610097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 
00:28:30.606 [2024-12-09 17:38:59.610316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.610349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.610548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.610580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.610763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.610795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.611003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.611036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.611154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.611187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.611372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.611407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.611617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.611650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.611945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.612076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.612109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 00:28:30.606 [2024-12-09 17:38:59.612348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.606 [2024-12-09 17:38:59.612382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.606 qpair failed and we were unable to recover it. 
00:28:30.606 [2024-12-09 17:38:59.612575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-09 17:38:59.612607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.611 (the three-line sequence above — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x511500 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats identically for every retry, with per-attempt timestamps running from [2024-12-09 17:38:59.612802] through [2024-12-09 17:38:59.656134]; roughly 200 duplicate attempts condensed)
00:28:30.611 [2024-12-09 17:38:59.656411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.656443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.656630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.656662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.656780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.656940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.656971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.657152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.657184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.657460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.657494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.657710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.657742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.657925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.657956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.658130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.658162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.658432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.658466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 
00:28:30.612 [2024-12-09 17:38:59.658670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.658702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.658880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.658912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.659098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.659130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.659256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.659290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.659491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.659523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.659712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.659744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.659981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.660118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.660255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.660412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 
00:28:30.612 [2024-12-09 17:38:59.660547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.660781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.660954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.660987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.661162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.661199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.661331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.661364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.661550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.661582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.661834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.661867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.662059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.662092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.662357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.662390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.662600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.662633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 
00:28:30.612 [2024-12-09 17:38:59.662847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.662879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.663070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.663102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.663225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.663260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.663455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.663487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.663664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.663696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.663831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.663864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.664052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.664085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.664294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.664327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.664522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.664554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 00:28:30.612 [2024-12-09 17:38:59.664725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.612 [2024-12-09 17:38:59.664758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.612 qpair failed and we were unable to recover it. 
00:28:30.613 [2024-12-09 17:38:59.664965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.664997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.665238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.665272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.665455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.665487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.665597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.665629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.665799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.665832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.666015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.666048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.666262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.666295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.666490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.666524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.666786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.666819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.666928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.666960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 
00:28:30.613 [2024-12-09 17:38:59.667200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.667247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.667460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.667494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.667685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.667716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.667905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.667937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.668044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.668077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.668267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.668301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.668473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.668504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.668744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.668777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.668907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.668940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.669131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.669163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 
00:28:30.613 [2024-12-09 17:38:59.669343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.669376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.669617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.669650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.669760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.669792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.669958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.669990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.670255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.670327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.670532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.670568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.670712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.670745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.670962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.670995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.671185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.671230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.671418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.671451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 
00:28:30.613 [2024-12-09 17:38:59.671575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.671608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.671797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.671831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-09 17:38:59.672004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-09 17:38:59.672038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.672239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.672274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.672454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.672487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.672683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.672717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.672893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.672926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.673040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.673084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.673263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.673298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.673499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.673533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-09 17:38:59.673643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.673676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.673944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.673978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.674164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.674197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.674391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.674424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.674546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.674579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.674785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.674819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.675023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.675056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.675243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.675277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.675471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.675505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.675776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.675808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-09 17:38:59.676004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.676038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.676286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.676321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.676456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.676489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.676613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.676647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.676766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.676799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.676927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.676961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.677135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.677168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.677445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.677478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.677722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.677757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.677884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.677918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-09 17:38:59.678130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.678163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.678413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.678448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.678665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.678699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.678825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.678859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.679199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.679283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.679564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.679637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.679845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.679882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.680118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.680150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.680276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.680309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.680442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.680476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-09 17:38:59.680747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.680780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-09 17:38:59.680906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-09 17:38:59.680938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.681181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.681214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.681410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.681444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.681704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.681736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.681919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.681952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.682237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.682271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.682454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.682486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.682665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.682697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.682948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.682980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-09 17:38:59.683118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.683151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.683422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.683455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.683575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.683608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.683853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.683885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.684059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.684090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.684278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.684311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.684510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.684543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.684734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.684766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.684951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.684985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-09 17:38:59.685230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-09 17:38:59.685263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-09 17:38:59.685404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.615 [2024-12-09 17:38:59.685436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.615 qpair failed and we were unable to recover it.
00:28:30.616 [2024-12-09 17:38:59.694255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.616 [2024-12-09 17:38:59.694302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.616 qpair failed and we were unable to recover it.
00:28:30.620 [... identical connect() failed (errno = 111) / qpair-unrecoverable sequence repeats for tqpair=0x511500 and tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 through 2024-12-09 17:38:59.730554 ...]
00:28:30.620 [2024-12-09 17:38:59.730670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.730703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.730907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.730941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.731180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.731213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.731416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.731451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.731588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.731621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.731895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.731927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.732050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.732084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.732334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.732369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.732560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.732593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.732781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.732814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 
00:28:30.621 [2024-12-09 17:38:59.733061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.733093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.733290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.733324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.733455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.733489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.733600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.733633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.733908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.733942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.734121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.734154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.734281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.734316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.734447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.734480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.734761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.734803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.734998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.735039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 
00:28:30.621 [2024-12-09 17:38:59.735236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.735271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.735385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.735434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.735626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.735660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.735838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.735871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.736153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.736186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.736332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.736366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.736547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.736596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.736815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.736848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.737109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.737143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.737349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.737390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 
00:28:30.621 [2024-12-09 17:38:59.737499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.737531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.737753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.737787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.738027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.738061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.738276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.738312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.738524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.738562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.738685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.621 [2024-12-09 17:38:59.738718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.621 qpair failed and we were unable to recover it. 00:28:30.621 [2024-12-09 17:38:59.738901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.900 [2024-12-09 17:38:59.738935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.900 qpair failed and we were unable to recover it. 00:28:30.900 [2024-12-09 17:38:59.739072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.900 [2024-12-09 17:38:59.739105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.900 qpair failed and we were unable to recover it. 00:28:30.900 [2024-12-09 17:38:59.739282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.739317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.739459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.739494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 
00:28:30.901 [2024-12-09 17:38:59.739614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.739648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.739850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.739897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.740060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.740107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.740257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.740306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.740594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.740644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.740869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.740915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.741193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.741263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.741507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.741552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.741769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.741814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.742051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.742100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 
00:28:30.901 [2024-12-09 17:38:59.742371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.742422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.742715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.742764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.742974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.743022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.743235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.743285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.743444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.743489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.743644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.743691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.743890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.743938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.744213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.744278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.744543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.744581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.744869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.744903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 
00:28:30.901 [2024-12-09 17:38:59.745049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.745083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.745272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.745309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.745485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.745518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.745806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.745840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.746019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.746053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.746255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.746290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.746467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.746502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.746773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.746806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.746924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.746958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.747156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.747190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 
00:28:30.901 [2024-12-09 17:38:59.747399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.747433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.747607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.747641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.747761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.747795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.747978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.901 [2024-12-09 17:38:59.748012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.901 qpair failed and we were unable to recover it. 00:28:30.901 [2024-12-09 17:38:59.748155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.748189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.748395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.748430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.748701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.748734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.748852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.748886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.749027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.749060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.749299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.749334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 
00:28:30.902 [2024-12-09 17:38:59.749474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.749506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.749694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.749728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.749902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.749935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.750198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.750249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.750496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.750528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.750716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.750750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.750942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.750980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.751123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.751160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.751295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.751329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.751517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.751550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 
00:28:30.902 [2024-12-09 17:38:59.751737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.751771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.751977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.752010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.752275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.752309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.752492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.752525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.752768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.752802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.752988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.753021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.753230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.753265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.753456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.753489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.753665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.753699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.753884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.753917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 
00:28:30.902 [2024-12-09 17:38:59.754113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.754146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.754268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.754303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.754497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.754529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.754702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.754734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.754975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.755008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.755273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.755308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.755419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.755451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.755648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.755681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.755891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.755924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.756107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.756140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 
00:28:30.902 [2024-12-09 17:38:59.756404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.756438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.756559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.756592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.902 [2024-12-09 17:38:59.756853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.902 [2024-12-09 17:38:59.756886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.902 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.757151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.757184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.757321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.757355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.757644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.757676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.757850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.757883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.758075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.758108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.758278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.758313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.758580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.758612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 
00:28:30.903 [2024-12-09 17:38:59.758792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.758824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.759075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.759108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.759280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.759313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.759559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.759592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.759732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.759765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.759950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.759983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.760160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.760198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.760328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.760361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.760488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.760521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 00:28:30.903 [2024-12-09 17:38:59.760715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.903 [2024-12-09 17:38:59.760748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.903 qpair failed and we were unable to recover it. 
00:28:30.903 [2024-12-09 17:38:59.760995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.903 [2024-12-09 17:38:59.761027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.903 qpair failed and we were unable to recover it.
[... the three log lines above repeat, differing only in timestamps, for roughly 208 further connection attempts spanning [2024-12-09 17:38:59.761208] through [2024-12-09 17:38:59.807869]; every attempt fails with connect() errno = 111 (connection refused) against 10.0.0.2 port 4420 on tqpair=0x7f8048000b90, and each ends with "qpair failed and we were unable to recover it." ...]
00:28:30.908 [2024-12-09 17:38:59.808130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.908 [2024-12-09 17:38:59.808164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.908 qpair failed and we were unable to recover it.
00:28:30.909 [2024-12-09 17:38:59.808365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.808400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.808615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.808648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.808837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.808870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.809158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.809191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.809380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.809413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.809529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.809563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.809749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.809782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.809890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.809923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.810098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.810132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.810262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.810298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 
00:28:30.909 [2024-12-09 17:38:59.810541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.810574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.810700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.810734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.810924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.810958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.811138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.811171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.811375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.811409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.811651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.811684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.811805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.811839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.811957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.811991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.812119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.812153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.812397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.812432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 
00:28:30.909 [2024-12-09 17:38:59.812721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.812754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.812884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.812917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.813182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.813214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.813326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.813359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.813482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.813516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.813710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.813743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.813850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.813888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.814035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.814068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.814190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.814233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.814349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.814382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 
00:28:30.909 [2024-12-09 17:38:59.814509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.814542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.814803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.814835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.815011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.815043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.909 [2024-12-09 17:38:59.815174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.909 [2024-12-09 17:38:59.815208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.909 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.815430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.815463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.815591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.815625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.815739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.815771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.815875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.815908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.816087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.816121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.816334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.816369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 
00:28:30.910 [2024-12-09 17:38:59.816561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.816596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.816796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.816831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.816949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.816983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.817117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.817151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.817328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.817362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.817536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.817570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.817740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.817774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.817959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.817993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.818263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.818298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.818476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.818509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 
00:28:30.910 [2024-12-09 17:38:59.818706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.818739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.818947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.818981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.819160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.819194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.819385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.819419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.819660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.819694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.819812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.819845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.820087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.820120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.820245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.820280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.820476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.820510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.820620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.820653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 
00:28:30.910 [2024-12-09 17:38:59.820775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.820808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.820925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.820959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.821167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.821199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.821405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.821439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.821651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.821683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.821783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.821816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.821987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.822025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.822300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.822335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.822469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.822502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.822624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.822658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 
00:28:30.910 [2024-12-09 17:38:59.822899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.910 [2024-12-09 17:38:59.822932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.910 qpair failed and we were unable to recover it. 00:28:30.910 [2024-12-09 17:38:59.823117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.823150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.823337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.823371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.823560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.823593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.823770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.823802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.824002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.824035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.824233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.824268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.824447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.824480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.824591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.824623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.824890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.824924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 
00:28:30.911 [2024-12-09 17:38:59.825191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.825246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.825433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.825468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.825607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.825639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.825749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.825782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.826050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.826082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.826330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.826367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.826558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.826589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.826770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.826803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.826932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.826964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.827090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.827123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 
00:28:30.911 [2024-12-09 17:38:59.827308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.827343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.827562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.827594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.827724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.827757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.827926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.827998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.828230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.828269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.828381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.828414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.828534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.828566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.828810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.828843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.829036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.829068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.829248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.829283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 
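Every failure in the stretch above is the same syscall-level event: connect() returning -1 with errno = 111, which on Linux is ECONNREFUSED -- the host could reach 10.0.0.2, but nothing was accepting TCP connections on port 4420 (the conventional NVMe/TCP port) at that moment. A minimal standalone sketch that reproduces the same errno -- plain POSIX sockets, not SPDK code; the address and port simply mirror the log:

/* Minimal sketch, assuming nothing is listening on the target port:
 * reproduces the exact errno the log reports. Plain POSIX sockets,
 * not SPDK code; the address and port mirror the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* conventional NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

If the peer were unreachable rather than refusing, the same call would instead fail with ETIMEDOUT (110) or EHOSTUNREACH, so a steady stream of 111s suggests the network path was fine and the NVMe/TCP listener on port 4420 simply was not up during this window.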
00:28:30.911 [2024-12-09 17:38:59.829551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.829583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.829691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.829724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.829842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.829875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.830019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.830051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.830291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.830326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.830497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.830530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.830723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.830764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.830882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.911 [2024-12-09 17:38:59.830916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.911 qpair failed and we were unable to recover it. 00:28:30.911 [2024-12-09 17:38:59.831134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.831168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.831374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.831407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 
00:28:30.912 [2024-12-09 17:38:59.831594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.831627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.831828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.831861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.832047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.832080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.832255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.832289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.832505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.832538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.832734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.832767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.832942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.832975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.833105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.833138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.833381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.833415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.833678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.833712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 
00:28:30.912 [2024-12-09 17:38:59.833832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.833864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.834040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.834073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.834211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.834253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.834438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.834471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.834604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.834638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.834891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.834924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.835176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.835209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.835404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.835437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.835557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.835591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.835767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.835800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 
00:28:30.912 [2024-12-09 17:38:59.836076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.836110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.836306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.836342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.836533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.836567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.836708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.836741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.836947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.836981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.837160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.837193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.837443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.837477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.837766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.837799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.837927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.837960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.838176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.838211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 
00:28:30.912 [2024-12-09 17:38:59.838333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.838367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.838556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.838590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.838709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.838742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.839004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.839038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.839227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.839260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.839534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.839567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.912 [2024-12-09 17:38:59.839754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.912 [2024-12-09 17:38:59.839792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.912 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.839978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.840011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.840194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.840236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.840432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.840465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 
00:28:30.913 [2024-12-09 17:38:59.841367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.913 [2024-12-09 17:38:59.841441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.913 qpair failed and we were unable to recover it.
00:28:30.913 [2024-12-09 17:38:59.841639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.913 [2024-12-09 17:38:59.841712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.913 qpair failed and we were unable to recover it.
00:28:30.913 [... the same sequence repeats for tqpair=0x7f8054000b90 ...]
00:28:30.913 [2024-12-09 17:38:59.845784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.913 [2024-12-09 17:38:59.845857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:30.913 qpair failed and we were unable to recover it.
00:28:30.913 [... the same sequence then repeats continuously for tqpair=0x7f804c000b90 ...]
00:28:30.913 [2024-12-09 17:38:59.847333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.847368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.847606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.847640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.847907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.847940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.848131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.848164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.848369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.848403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.913 qpair failed and we were unable to recover it. 00:28:30.913 [2024-12-09 17:38:59.848612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.913 [2024-12-09 17:38:59.848645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.848819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.848854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.849040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.849074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.849214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.849256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.849451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.849484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 
00:28:30.914 [2024-12-09 17:38:59.849655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.849688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.849961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.849995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.850131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.850165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.850321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.850357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.850529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.850562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.850753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.850786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.850964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.850997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.851185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.851225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.851410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.851443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.851634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.851667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 
00:28:30.914 [2024-12-09 17:38:59.851869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.851901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.852165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.852199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.852387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.852420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.852535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.852568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.852751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.852785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.852901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.852935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.853118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.853150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.853272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.853307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.853552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.853585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.853761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.853794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 
00:28:30.914 [2024-12-09 17:38:59.853996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.854030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.854155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.854188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.854325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.854359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.854644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.854682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.854873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.854907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.855096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.855129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.855256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.855291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.855482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.855516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.855625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.855659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.914 qpair failed and we were unable to recover it. 00:28:30.914 [2024-12-09 17:38:59.855856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.914 [2024-12-09 17:38:59.855889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 
00:28:30.915 [2024-12-09 17:38:59.856079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.856111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.856284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.856319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.856494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.856526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.856732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.856765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.856895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.856927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.857168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.857201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.857329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.857362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.857490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.857523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.857711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.857745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.857860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.857893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 
00:28:30.915 [2024-12-09 17:38:59.858026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.858059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.858304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.858338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.858550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.858583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.858707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.858740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.858860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.858893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.859087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.859119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.859361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.859396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.859635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.859668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.859845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.859879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.860072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.860104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 
00:28:30.915 [2024-12-09 17:38:59.860296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.860331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.860504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.860538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.860816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.860853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.860982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.861013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.861120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.861151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.861342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.861374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.861637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.861670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.861786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.861819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.861957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.861989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.862095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.862129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 
00:28:30.915 [2024-12-09 17:38:59.862327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.862363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.862550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.862582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.862767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.862800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.862921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.862960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.863142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.863174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.863432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.863466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.863643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.863677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.863869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.863902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.915 qpair failed and we were unable to recover it. 00:28:30.915 [2024-12-09 17:38:59.864073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.915 [2024-12-09 17:38:59.864105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.864296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.864331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 
00:28:30.916 [2024-12-09 17:38:59.864452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.864484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.864661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.864695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.864947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.864979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.865118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.865152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.865328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.865362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.865636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.865669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.865924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.865957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.866099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.866132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.866263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.866301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.866558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.866590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 
00:28:30.916 [2024-12-09 17:38:59.866694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.866728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.866843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.866876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.867003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.867036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.867254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.867290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.867532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.867566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.867761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.867794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.868052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.868085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.868269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.868304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.868434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.868466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.868729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.868762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 
00:28:30.916 [2024-12-09 17:38:59.869012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.869046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.869236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.869270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.869448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.869482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.869604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.869638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.869883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.869916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.870175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.870209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.870411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.870444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.870688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.870722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.870843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.870876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.871005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.871039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 
00:28:30.916 [2024-12-09 17:38:59.871235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.871270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.871536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.871570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.871744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.871777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.871957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.871997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.872107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.872140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.872337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.872372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.872588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.916 [2024-12-09 17:38:59.872622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.916 qpair failed and we were unable to recover it. 00:28:30.916 [2024-12-09 17:38:59.872752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.917 [2024-12-09 17:38:59.872786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.917 qpair failed and we were unable to recover it. 00:28:30.917 [2024-12-09 17:38:59.873047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.917 [2024-12-09 17:38:59.873080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.917 qpair failed and we were unable to recover it. 00:28:30.917 [2024-12-09 17:38:59.873275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.917 [2024-12-09 17:38:59.873309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.917 qpair failed and we were unable to recover it. 
00:28:30.917 [2024-12-09 17:38:59.873512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.917 [2024-12-09 17:38:59.873545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:30.917 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." three-message sequence repeats for every reconnect attempt from 17:38:59.873784 through 17:38:59.916454 ...]
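errno = 111 is ECONNREFUSED: while the target process is down, every TCP connect() from the host-side initiator is refused, and nvme_tcp logs the failed qpair before retrying. A minimal shell sketch of the same probe (a hypothetical check, assuming, as in this window of the test, that nothing is listening on 10.0.0.2:4420):

# Hypothetical probe: try a TCP connect() to the target's address the way the
# initiator does; with no listener on the port the attempt is refused with
# ECONNREFUSED (errno 111), which is exactly what the messages above report.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
    echo "connect() succeeded - a listener is back on 10.0.0.2:4420"
else
    echo "connect() refused - no listener yet (errno 111)"
fi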
00:28:30.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2738486 Killed "${NVMF_APP[@]}" "$@"
00:28:30.918 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:30.918 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:30.918 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:30.918 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:30.918 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
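In the trace above, nvmfappstart relaunches the target with -m 0xF0, SPDK's core-mask option: 0xF0 is binary 11110000, so the app's reactors are pinned to CPU cores 4-7, leaving cores 0-3 for the host-side tools. A quick way to expand any such mask into its core list (an illustrative snippet, not part of the test scripts):

# Expand an SPDK core mask into the CPU cores it selects; 0xF0 -> cores 4 5 6 7.
mask=0xF0
printf 'cores:'
for i in {0..31}; do
    if (( (mask >> i) & 1 )); then
        printf ' %d' "$i"
    fi
done
echo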
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2739190
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2739190
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2739190 ']'
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:30.919 17:38:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
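The waitforlisten trace above shows the wait loop's inputs: rpc_addr=/var/tmp/spdk.sock and max_retries=100. The helper blocks until the freshly started nvmf_tgt (pid 2739190) answers on that socket. A simplified stand-in with the same shape, checking only that the UNIX socket appears (the real helper also verifies the process and issues an RPC):

# Simplified stand-in for waitforlisten: poll until the SPDK RPC socket shows
# up, giving up after max_retries attempts.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    if [ -S "$rpc_addr" ]; then
        echo "process is listening on $rpc_addr"
        break
    fi
    sleep 0.1
done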
00:28:30.922 [2024-12-09 17:38:59.916564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.916596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.916869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.916903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.917043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.917077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.917251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.917287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.917398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.917433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.917631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.917665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.917792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.917827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.918107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.918141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.918331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.918367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.918475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.918517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 
00:28:30.922 [2024-12-09 17:38:59.918628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.918659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.918780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.918814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.919088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.919123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.919317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.919352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.919530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.919564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.919691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.919725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.919848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.919884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.920004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.922 [2024-12-09 17:38:59.920038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.922 qpair failed and we were unable to recover it. 00:28:30.922 [2024-12-09 17:38:59.920163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.920197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.920438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.920476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 
00:28:30.923 [2024-12-09 17:38:59.920656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.920691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.920811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.920845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.920978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.921012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.921253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.921289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.921415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.921449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.921631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.921666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.921912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.921947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.922243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.922279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.922459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.922499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.922618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.922649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 
00:28:30.923 [2024-12-09 17:38:59.922836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.922869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.923080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.923114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.923366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.923402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.923598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.923632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.923871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.923904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.924085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.924119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.924232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.924267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.924456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.924489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.924667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.924707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.924904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.924939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 
00:28:30.923 [2024-12-09 17:38:59.925082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.925115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.925265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.925301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.925431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.925465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.925577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.925610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.925811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.925847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.925951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.925985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.926155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.926189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.926391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.926425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.926607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.926642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.926759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.926793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 
00:28:30.923 [2024-12-09 17:38:59.926904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.926938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.927065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.927100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.923 qpair failed and we were unable to recover it. 00:28:30.923 [2024-12-09 17:38:59.927324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.923 [2024-12-09 17:38:59.927361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.927495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.927529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.927770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.927803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.927918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.927952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.928124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.928162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.928356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.928390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.928595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.928628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.928902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.928936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 
00:28:30.924 [2024-12-09 17:38:59.929062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.929094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.929355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.929389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.929516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.929549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.929793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.929825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.930003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.930036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.930358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.930431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.930673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.930712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.930913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.930947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.931070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.931103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.931276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.931310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 
00:28:30.924 [2024-12-09 17:38:59.931423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.931457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.931629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.931662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.931858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.931892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.932063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.932096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.932195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.932245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.932487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.932519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.932639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.932673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.932873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.932906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.933083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.933126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.933240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.933276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 
00:28:30.924 [2024-12-09 17:38:59.933469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.933502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.933621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.933654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.933851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.933884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.934080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.934113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.934386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.934420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.934612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.934645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.934764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.934798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.934985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.935019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.935274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.935309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.924 [2024-12-09 17:38:59.935519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.935552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 
00:28:30.924 [2024-12-09 17:38:59.935660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.924 [2024-12-09 17:38:59.935694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.924 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.935799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.935832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.935964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.935999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.936112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.936146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.936346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.936380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.936566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.936599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.936818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.936851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.937067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.937100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.937237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.937272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.937468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.937502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 
00:28:30.925 [2024-12-09 17:38:59.937610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.937643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.937838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.937872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.938056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.938090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.938211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.938257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.938501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.938535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.938850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.938923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.939135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.939171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.939389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.939424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.939551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.939583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.939778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.939811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 
00:28:30.925 [2024-12-09 17:38:59.939992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.940024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.940204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.940251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.940453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.940487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.940678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.940712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.940988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.941022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.941198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.941242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.941385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.941418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 [2024-12-09 17:38:59.941423] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.941474] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.925 [2024-12-09 17:38:59.941688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.941729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.941932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.941963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 
00:28:30.925 [2024-12-09 17:38:59.942086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.942118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.942263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.942295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.942420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.942452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.942657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.942689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.942864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.942898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.943029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.943063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.943242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.943279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.943463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.943498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.943705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.943741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 00:28:30.925 [2024-12-09 17:38:59.943924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.925 [2024-12-09 17:38:59.943959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.925 qpair failed and we were unable to recover it. 
00:28:30.926 [2024-12-09 17:38:59.944066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.926 [2024-12-09 17:38:59.944100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.926 qpair failed and we were unable to recover it.
00:28:30.931 [... the same three-line failure pattern (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 17:38:59.944 through 17:38:59.989, all against addr=10.0.0.2, port=4420, cycling through tqpair values 0x7f8054000b90, 0x7f804c000b90, 0x511500, and 0x7f8048000b90 ...]
00:28:30.931 [2024-12-09 17:38:59.989912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.989945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.990126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.990160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.990357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.990392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.990516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.990550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.990733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.990765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.990888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.990920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.991160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.991202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.991461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.991495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.991763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.991794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.991980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.992012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 
00:28:30.931 [2024-12-09 17:38:59.992145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.992178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.992431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.992466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.992586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.992619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.992762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.992795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.992977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.993010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.993201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.993244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.931 qpair failed and we were unable to recover it. 00:28:30.931 [2024-12-09 17:38:59.993437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.931 [2024-12-09 17:38:59.993470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.993652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.993685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.993815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.993847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.994021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.994053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 
00:28:30.932 [2024-12-09 17:38:59.994267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.994302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.994569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.994602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.994780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.994812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.994917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.994951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.995140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.995172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.995487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.995523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.995655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.995688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.995859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.995892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.996020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.996054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.996237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.996271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 
00:28:30.932 [2024-12-09 17:38:59.996446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.996478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.996672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.996705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.996828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.996862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.997044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.997078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.997311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.997345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.997510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.997542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.997681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.997713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.997964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.997998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.998103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.998136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.998323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.998357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 
00:28:30.932 [2024-12-09 17:38:59.998621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.998653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.998767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.998802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.998919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.998951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.999269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.999302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.999477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.999509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.999612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.999645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:38:59.999844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:38:59.999883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:39:00.000087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:39:00.000121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:39:00.000325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:39:00.000358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:39:00.000488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:39:00.000521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 
00:28:30.932 [2024-12-09 17:39:00.000699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.932 [2024-12-09 17:39:00.000732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.932 qpair failed and we were unable to recover it. 00:28:30.932 [2024-12-09 17:39:00.000954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.000989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.001128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.001163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.001402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.001437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.001625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.001658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.001785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.001817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.001972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.002005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.002343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.002377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.002508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.002541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.002674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.002708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 
00:28:30.933 [2024-12-09 17:39:00.002846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.002879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.002994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.003029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.003150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.003183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.003351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.003385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.003504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.003537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.003656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.003689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.003858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.003891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.004004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.004037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.004178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.004212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.004406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.004440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 
00:28:30.933 [2024-12-09 17:39:00.004556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.004589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.004711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.004745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.004919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.004952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.005114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.005185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.005405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.005445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.005622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.005657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.005867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.005901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.006022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.006056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.006296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.006332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.006464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.006498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 
00:28:30.933 [2024-12-09 17:39:00.006681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.006714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.006871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.006905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.007086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.007120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.007264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.007301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.007411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.007447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.007563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.007596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.007725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.007765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.007883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.007916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.008066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.933 [2024-12-09 17:39:00.008101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.933 qpair failed and we were unable to recover it. 00:28:30.933 [2024-12-09 17:39:00.008229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.008264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 
00:28:30.934 [2024-12-09 17:39:00.008383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.008417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.008556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.008589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.008849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.008882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.009069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.009103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.009212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.009255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.009472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.009505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.009689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.009723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.009858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.009892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.010003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.010037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.010214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.010257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 
00:28:30.934 [2024-12-09 17:39:00.010397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.010431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.010621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.010655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.010840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.010873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.011045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.011079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.011210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.011259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.011475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.011509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.011625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.011659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.011843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.011876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.012001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.012034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.012238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.012274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 
00:28:30.934 [2024-12-09 17:39:00.012412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.012445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.012692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.012726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.012838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.012872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.013061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.013099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.013289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.013324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.013515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.013549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.013716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.013749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.013883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.013916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.014089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.014123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.014327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.014364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 
00:28:30.934 [2024-12-09 17:39:00.014549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.014581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.014753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.014786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.014920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.014953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.015200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.015244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.015352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.015386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.934 [2024-12-09 17:39:00.015649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.934 [2024-12-09 17:39:00.015682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.934 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.015807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.015841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.016049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.016084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.016204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.016247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.016489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.016525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 
00:28:30.935 [2024-12-09 17:39:00.018529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.018587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.018818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.018879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.019165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.019203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.019390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.019425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.019669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.019704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.019835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.019869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.020115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.020151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.020341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.020377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.020507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.020540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.020651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.020686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 
00:28:30.935 [2024-12-09 17:39:00.020839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.020874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.021019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.021052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.021191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.021236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.021380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.021412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.021545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.021576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.021723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.021755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.021879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.021914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.022035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.022066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.022259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.022293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.022413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.022444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 
00:28:30.935 [2024-12-09 17:39:00.022638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.022680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.022844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.022889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.023049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.023091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.023288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.023342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.023486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.023520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.023647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.023678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.023789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.023820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.023931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.023962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.024085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.024117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 00:28:30.935 [2024-12-09 17:39:00.024252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.935 [2024-12-09 17:39:00.024305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.935 qpair failed and we were unable to recover it. 
00:28:30.935 [2024-12-09 17:39:00.024563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.935 [2024-12-09 17:39:00.024596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.935 qpair failed and we were unable to recover it.
00:28:30.935 [2024-12-09 17:39:00.024778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.935 [2024-12-09 17:39:00.024810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.935 qpair failed and we were unable to recover it.
00:28:30.935 [2024-12-09 17:39:00.025019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.935 [2024-12-09 17:39:00.025053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.935 qpair failed and we were unable to recover it.
00:28:30.936 [2024-12-09 17:39:00.025248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.936 [2024-12-09 17:39:00.025282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.936 qpair failed and we were unable to recover it.
00:28:30.936 [2024-12-09 17:39:00.025405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.936 [2024-12-09 17:39:00.025436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.936 qpair failed and we were unable to recover it.
00:28:30.936 [2024-12-09 17:39:00.025571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.936 [2024-12-09 17:39:00.025601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.936 qpair failed and we were unable to recover it.
00:28:30.936 [2024-12-09 17:39:00.025723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.936 [2024-12-09 17:39:00.025754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.936 qpair failed and we were unable to recover it.
00:28:30.936 [2024-12-09 17:39:00.025951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.936 [2024-12-09 17:39:00.025981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:30.936 qpair failed and we were unable to recover it.
00:28:30.936 [2024-12-09 17:39:00.026124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:30.936 [2024-12-09 17:39:00.026193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.936 [2024-12-09 17:39:00.026269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:30.936 qpair failed and we were unable to recover it.
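The interleaved NOTICE from app.c above comes from the SPDK application framework starting up on the test node and reporting its reactor core count; it is unrelated to the surrounding socket errors. A minimal sketch of that startup path is shown below, assuming a recent SPDK release (the spdk_app_opts_init signature has changed across versions, so treat this as an approximation rather than version-exact API usage).

#include "spdk/event.h"

static void start_fn(void *arg)
{
    /* Application work would be scheduled here; stop immediately. */
    spdk_app_stop(0);
}

int main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "demo_app"; /* hypothetical app name for illustration */

    /* spdk_app_start() emits startup notices, including the
     * "Total cores available: N" line seen in the log above. */
    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}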
00:28:30.936 [2024-12-09 17:39:00.026398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.026432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.026611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.026643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.026832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.026867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.026986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.027018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.027195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.027248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.027421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.027455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.027583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.027614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.027736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.027768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.027963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.027993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.028132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.028164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 
00:28:30.936 [2024-12-09 17:39:00.028447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.028484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.028723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.028758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.028938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.028970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.029095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.029128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.029255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.029288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.029412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.029445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.029598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.029629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.029746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.029778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.030018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.030055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.030193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.030231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 
00:28:30.936 [2024-12-09 17:39:00.030361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.030393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.030506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.030537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.030655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.030686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.030820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.030854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.030987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.031018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.031154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.031186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.031305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.031341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.031551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.031585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.031731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.031764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.936 qpair failed and we were unable to recover it. 00:28:30.936 [2024-12-09 17:39:00.031977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.936 [2024-12-09 17:39:00.032011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 
00:28:30.937 [2024-12-09 17:39:00.032196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.032253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.032443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.032478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.032688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.032723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.032911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.032946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.033134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.033168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.033301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.033337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.033592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.033627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.033807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.033847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.033967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.033998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.034109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.034140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 
00:28:30.937 [2024-12-09 17:39:00.034276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.034309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.034487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.034520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.034633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.034664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.034795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.034827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.034933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.034965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.035098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.035133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.035302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.035340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.035466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.035500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.035676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.035711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.035831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.035862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 
00:28:30.937 [2024-12-09 17:39:00.036085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.036120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.036241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.036274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.036380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.036411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.036659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.036694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.036934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.036972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.037148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.037183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.037343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.037387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.037580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.037617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.037747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.037779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.037896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.037930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 
00:28:30.937 [2024-12-09 17:39:00.038053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.038084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.038215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.038260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.038470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.038503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.038675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.038708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.038899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.038933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.039054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.937 [2024-12-09 17:39:00.039085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.937 qpair failed and we were unable to recover it. 00:28:30.937 [2024-12-09 17:39:00.039207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.039280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.039477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.039510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.039693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.039727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.039903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.039936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 
00:28:30.938 [2024-12-09 17:39:00.040061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.040092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.040214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.040258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.040442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.040476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.040658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.040692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.040807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.040841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.040964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.040998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.041126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.041160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.041300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.041343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.041451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.041484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.041604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.041637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 
00:28:30.938 [2024-12-09 17:39:00.041810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.041844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.041977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.042009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.042146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.042180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.042316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.042351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.042529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.042561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.042767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.042800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.042923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.042956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.043185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.043237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.043415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.043450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.043624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.043657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 
00:28:30.938 [2024-12-09 17:39:00.043849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.043881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.044002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.044035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.044243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.044279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.044480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.044512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.044621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.044655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.044780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.044813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.044998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.045031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.045254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.045289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.045412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.045444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.045605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.045639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 
00:28:30.938 [2024-12-09 17:39:00.045881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.045914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.046102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.046136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.046273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.046307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.046490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.046523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.938 qpair failed and we were unable to recover it. 00:28:30.938 [2024-12-09 17:39:00.046708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.938 [2024-12-09 17:39:00.046739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.046861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.046894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.047086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.047120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.047316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.047350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.047523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.047566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.047704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.047738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 
00:28:30.939 [2024-12-09 17:39:00.047868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.047901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.048029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.048062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.048189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.048241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.048423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.048456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.049034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.049081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.049284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.049322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.049568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.049601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.049736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.049775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.049889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.049924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.050111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.050144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 
00:28:30.939 [2024-12-09 17:39:00.050333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.050368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.050563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.050596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.050754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.050827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.051014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.051078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.051360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.051408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.051541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.051583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.051712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.051745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.051937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.051970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.052080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.052112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.052294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.052331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 
00:28:30.939 [2024-12-09 17:39:00.052456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.052489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.052645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.052807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.052842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.053022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.053055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.053198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.053247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.053376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.053409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.053553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.053586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.053732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.053765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.053900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.053933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.054085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.054118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 
00:28:30.939 [2024-12-09 17:39:00.054272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.054305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.054446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.054480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.054587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.054620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.939 [2024-12-09 17:39:00.054732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.939 [2024-12-09 17:39:00.054764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.939 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.054878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.054918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.055050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.055082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.055230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.055264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.055410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.055443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.055601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.055634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.055814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.055847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 
00:28:30.940 [2024-12-09 17:39:00.055992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.056025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.056153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.056187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.056329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.056364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.056495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.056527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.056650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.056686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.056811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.056843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.056973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.057005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.057114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.057147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.057310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.057371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 00:28:30.940 [2024-12-09 17:39:00.057520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.940 [2024-12-09 17:39:00.057578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:30.940 qpair failed and we were unable to recover it. 
00:28:30.940 [2024-12-09 17:39:00.057799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.057857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.058008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.058044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.058263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.058298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.058420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.058454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.058576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.058609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.058740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.058774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.058968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.059002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.059181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.059213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.059446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.059482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.059657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.059692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 
00:28:31.214 [2024-12-09 17:39:00.059807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.059840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.060046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.060090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.060260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.060297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.060479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.060513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.060793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.060827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.060963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.060998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.061188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.061232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.061531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.061564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.061807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.061841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.062051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.062086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 
00:28:31.214 [2024-12-09 17:39:00.062228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.062263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.062445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.062478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.062660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.062694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.214 qpair failed and we were unable to recover it. 00:28:31.214 [2024-12-09 17:39:00.062802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.214 [2024-12-09 17:39:00.062843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.063132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.063167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.063369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.063404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.063581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.063625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.063763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.063797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.063938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.063972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.064108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.064140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 
00:28:31.215 [2024-12-09 17:39:00.064365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.064400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.064650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.064684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.064880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.064915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.065159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.065193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.065326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.065360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.065477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.065513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.065700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.065733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.065911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.065945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.066170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.066244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.066520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.066564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 
00:28:31.215 [2024-12-09 17:39:00.066847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.066891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.067090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.067124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.067346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.067382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.067502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.067537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.067781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.067814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.068058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.068091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.068283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.068319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.068455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.068489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.068680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.068714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.068834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.068867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 
00:28:31.215 [2024-12-09 17:39:00.068973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.069007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.069275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.069309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.069439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.069473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.069602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.069637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.069829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.069863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.070057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.070090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.070278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.070314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.070420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.070460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.070582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.070614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.070726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.070760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 
00:28:31.215 [2024-12-09 17:39:00.070867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.070901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.071013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.215 [2024-12-09 17:39:00.071046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.215 qpair failed and we were unable to recover it. 00:28:31.215 [2024-12-09 17:39:00.071153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.071187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.071234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.216 [2024-12-09 17:39:00.071262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.216 [2024-12-09 17:39:00.071271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.216 [2024-12-09 17:39:00.071278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.216 [2024-12-09 17:39:00.071283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.216 [2024-12-09 17:39:00.071458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.071506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.071709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.071743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.071921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.071955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.072057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.072091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.072193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.072240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 
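The app_setup_trace NOTICEs above give the recipe for pulling the nvmf tracepoint data while the target is up; a minimal sketch of the two quoted forms (the /tmp destination for the copy is hypothetical):

  spdk_trace -s nvmf -i 0                      # snapshot events from SPDK app instance 0 at runtime
  spdk_trace                                   # also works if this is the only SPDK app running
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # keep the raw trace file for offline analysis/debug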
00:28:31.216 [2024-12-09 17:39:00.072462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.072495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.072674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.072708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.072892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.072927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 [2024-12-09 17:39:00.072836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.072944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:31.216 [2024-12-09 17:39:00.073052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:31.216 [2024-12-09 17:39:00.073120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.073152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.073052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:31.216 [2024-12-09 17:39:00.073326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.073361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.073609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.073644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.073923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.073957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.074157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.074197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.074325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.074361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it.
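The four reactor_run NOTICEs above show the target's reactors (its per-core event loops) starting on cores 4-7, one reactor per core in the app's cpumask; a mask covering exactly those cores is 0xF0. A hypothetical invocation consistent with these notices and with the shm id 0 implied by /dev/shm/nvmf_trace.0 (the binary path assumes a default SPDK build tree):

  ./build/bin/nvmf_tgt -m 0xF0 -i 0   # -m cpumask selects cores 4-7; -i sets the shm/trace instance id

The notices interleave with the connect() retries because the two processes share the console: the target can only start accepting on port 4420 once its reactors are running, which is consistent with the refusals logged on either side of this point.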
00:28:31.216 [2024-12-09 17:39:00.074556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.074589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.074831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.074865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.075062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.075097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.075202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.075245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.075422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.075457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.075640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.075675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.075885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.075919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.076058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.076092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.076287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.076321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.076499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.076534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 
00:28:31.216 [2024-12-09 17:39:00.076713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.076747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.076914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.076948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.077153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.077188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.077326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.077360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.077487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.077522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.077652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.077686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.077899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.077940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.078139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.078180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.078392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.078433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.216 [2024-12-09 17:39:00.078707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.078745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 
00:28:31.216 [2024-12-09 17:39:00.078868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.216 [2024-12-09 17:39:00.078902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.216 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.079041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.079075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.079252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.079287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.079547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.079581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.079824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.079859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.080088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.080126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.080263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.080299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.080509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.080544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.080740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.080775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.080908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.080943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 
00:28:31.217 [2024-12-09 17:39:00.081077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.081112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.081308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.081343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.081463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.081497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.081612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.081648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.081828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.081861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.081997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.082031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.082282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.082318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.082501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.082535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.082682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.082723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.082829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.082864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 
00:28:31.217 [2024-12-09 17:39:00.083035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.083068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.083207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.083253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.083438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.083476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.083593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.083628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.083836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.083870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.084002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.084037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.084245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.084281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.084478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.084514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.084758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.084794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.084944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.084980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 
00:28:31.217 [2024-12-09 17:39:00.085158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.085191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.085323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.085359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.085541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.085576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.085757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.085793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.085978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.086013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.086201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.086247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.086379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.086413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.086658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.086692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.086803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.086837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.217 qpair failed and we were unable to recover it. 00:28:31.217 [2024-12-09 17:39:00.086965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.217 [2024-12-09 17:39:00.087000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 
00:28:31.218 [2024-12-09 17:39:00.087248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.087283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.087488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.087523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.087792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.087830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.087962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.087996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.088268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.088306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.088599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.088651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.088846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.088880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.089008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.089043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.089253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.089291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.089468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.089502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 
00:28:31.218 [2024-12-09 17:39:00.089697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.089731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.089858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.089892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.090076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.090111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.090291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.090327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.090595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.090630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.090753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.090786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.091051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.091087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.091314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.091350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.091549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.091585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.091857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 
00:28:31.218 [2024-12-09 17:39:00.092034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.092069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.092254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.092289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.092479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.092515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.092661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.092697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.092949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.092986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.093095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.093130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.093337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.093373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.093506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.093540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.093663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.093697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.093889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.093927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 
00:28:31.218 [2024-12-09 17:39:00.094132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.094171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.094444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.094482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.094621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.094663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.218 [2024-12-09 17:39:00.094849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.218 [2024-12-09 17:39:00.094884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.218 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.095011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.095044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.095258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.095294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.095486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.095521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.095713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.095747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.095918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.095954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.096141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.096175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 
00:28:31.219 [2024-12-09 17:39:00.096383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.096418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.096593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.096626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.096765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.096799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.096996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.097030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.097231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.097267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.097373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.097407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.097621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.097655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.097783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.097815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.098028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.098063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.098257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.098291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 
00:28:31.219 [2024-12-09 17:39:00.098398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.098432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.098579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.098736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.098769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.098906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.098940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.099116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.099152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.099325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.099361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.099625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.099660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.099777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.099810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.099929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.099963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.100160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.100200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 
00:28:31.219 [2024-12-09 17:39:00.100332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.100368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.100661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.100707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.100860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.100903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.101046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.101093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.101353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.101512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.101546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.101681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.101712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.101968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.102002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.102245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.102296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.102510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.102544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 
00:28:31.219 [2024-12-09 17:39:00.102701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.102733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.102883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.102917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.219 [2024-12-09 17:39:00.103050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.219 [2024-12-09 17:39:00.103082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.219 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.103234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.103267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.103414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.103444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.103608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.103641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.103786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.103818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.103965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.103997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.104124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.104156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.104314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.104346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 
00:28:31.220 [2024-12-09 17:39:00.104559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.104593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.104739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.104771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.104938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.104972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.105192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.105252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.105424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.105469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.105628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.105672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.105824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.105876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.106009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.106056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.106192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.106238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.106358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.106391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 
00:28:31.220 [2024-12-09 17:39:00.106633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.106666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.106848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.106888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.107117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.107149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.107260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.107295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.107499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.107530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.107718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.107749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.107928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.107960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.108087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.108119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.108260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.108293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.108471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.108503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 
00:28:31.220 [2024-12-09 17:39:00.108703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.108735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.108847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.108878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.109154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.109185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.109375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.109407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.109580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.109611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.109731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.109763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.109867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.109899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.110236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.110270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.110472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.110503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.110674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.110705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 
00:28:31.220 [2024-12-09 17:39:00.110949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.110982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.220 [2024-12-09 17:39:00.111179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.220 [2024-12-09 17:39:00.111210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.220 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.111334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.111366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.111546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.111578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.111744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.111775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.111988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.112019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.112237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.112271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.112404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.112435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.112643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.112674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.112871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.112903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 
00:28:31.221 [2024-12-09 17:39:00.113116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.113146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.113324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.113358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.113488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.113520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.113700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.113731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.113966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.113998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.114103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.114134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.114268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.114308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.114424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.114457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.114716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.114747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.114940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.114972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 
00:28:31.221 [2024-12-09 17:39:00.115076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.115109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.115309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.115342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.115462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.115494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.115685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.115717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.115888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.115920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.116093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.116125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.116248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.116281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.116456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.116488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.116657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.116690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.116943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.116975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 
00:28:31.221 [2024-12-09 17:39:00.117202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.117244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.117355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.117388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.117631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.117664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.117846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.117878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.118174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.118207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.118320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.118353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.118528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.118561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.118705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.118739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.118854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.118887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.221 [2024-12-09 17:39:00.119060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.119093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 
00:28:31.221 [2024-12-09 17:39:00.119288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.221 [2024-12-09 17:39:00.119325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.221 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.119429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.119462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.119564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.119596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.119898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.119932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.120098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.120130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.120346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.120382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.120514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.120548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.120796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.120828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.121021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.121052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.121322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.121359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 
00:28:31.222 [2024-12-09 17:39:00.121486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.121517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.121788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.121821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.121940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.121973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.122161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.122194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.122318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.122355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.122486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.122517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.122701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.122738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.122846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.122877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.122988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.123020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.123238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.123273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 
00:28:31.222 [2024-12-09 17:39:00.123379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.123412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.123587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.123619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.123794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.123827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.123931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.123962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.124136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.124168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.124312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.124346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.124469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.124503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.124681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.124714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.124849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.124881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.124992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.125024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 
00:28:31.222 [2024-12-09 17:39:00.125196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.125239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.125357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.125389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.125512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.125545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.125721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.125753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.125873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.125904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.126091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.126123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.126257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.126292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.126409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.126442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.222 [2024-12-09 17:39:00.126670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.222 [2024-12-09 17:39:00.126702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.222 qpair failed and we were unable to recover it. 00:28:31.223 [2024-12-09 17:39:00.126891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.223 [2024-12-09 17:39:00.126923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.223 qpair failed and we were unable to recover it. 
00:28:31.223 [2024-12-09 17:39:00.127097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.127129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.127331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.127364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.127601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.127633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.127827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.127857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.128032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.128064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.128247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.128280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.128478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.128510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.128724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.128756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.128939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.128970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.129081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.129113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.129298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.129331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.129441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.129473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.129649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.129681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.129788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.129820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.130007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.130039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.130209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.130265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.130468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.130508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.130683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.130715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.130837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.130870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.130977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.131008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.131247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.131280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.131458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.131489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.131607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.131640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.131830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.131862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.132070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.132101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.132235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.132269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.132456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.132488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.132690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.132721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.132855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.132888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.132997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.133030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.133239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.133271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.133391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.133423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.133559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.133591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.223 qpair failed and we were unable to recover it.
00:28:31.223 [2024-12-09 17:39:00.133716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.223 [2024-12-09 17:39:00.133749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.133867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.133898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.134141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.134174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.134314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.134461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.134494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.134614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.134645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.134761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.134794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.134963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.134996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.135184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.135229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.135348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.135381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.135542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.135606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.135818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.135878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.136105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.136168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.136369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.136404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.136623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.136655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.136840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.136872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.136974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.137006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.137143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.137175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.137313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.137346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.137535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.137566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.137681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.137713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.137929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.137961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.138261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.138297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.138477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.138514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.138627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.138657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.138817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.138849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.139047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.139077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.139190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.139233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.139418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.139448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.139663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.139694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.139883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.139915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.140118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.140150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.140343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.140376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.140570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.140602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.140774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.140804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.140972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.141006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.141205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.141246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.141497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.141530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.141723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.141754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.224 qpair failed and we were unable to recover it.
00:28:31.224 [2024-12-09 17:39:00.141868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.224 [2024-12-09 17:39:00.141899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.142006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.142036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.142248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.142280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.142445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.142476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.142609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.142638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.142760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.142790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.142965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.142994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.143243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.143276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.143437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.143468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.143586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.143617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.143742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.143772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.143976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.144011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.144155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.144189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.144499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.144541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.144683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.144716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.144852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.144886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.145007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.145039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.145233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.145268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.145402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.145435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.145603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.145638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.145881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.145914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.146039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.146072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.146339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.146374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.146607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.146639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.146907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.146947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.147213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.147257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.147378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.147411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.147653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.147686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.147856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.147888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.147999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.148032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.148246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.148279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.148410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.148441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.148647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.148680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.148811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.148842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.148964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.148996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.149216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.149258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.149394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.149426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.149550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.149582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.149781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.149813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.225 qpair failed and we were unable to recover it.
00:28:31.225 [2024-12-09 17:39:00.149954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.225 [2024-12-09 17:39:00.149986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.150252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.150286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.150431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.150465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.150580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.150612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.150826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.150864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.150999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.151032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.151231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.151264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.151454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.151497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.151696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.151730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.151915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.151952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.152111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.152143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.152273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.152309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.152486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.152521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.152699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.152730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.152909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.152940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.153074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.153106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.153232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.153267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.153372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.153405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.153540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.153575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.153699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.153732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.153992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.154024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.154154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.154187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.154304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.154336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.154535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.154569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.154813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.154845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.154975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.155016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.155200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.155249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.155360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.155392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.155496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.155529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.155708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.155740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.155924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.155957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.156057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.156090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.156244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.156279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.156381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.156414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.156587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.156620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.156835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.156868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.157080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.157112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.157234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.157267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.157534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.226 [2024-12-09 17:39:00.157567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.226 qpair failed and we were unable to recover it.
00:28:31.226 [2024-12-09 17:39:00.157751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.157784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.158008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.158040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.158161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.158193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.158378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.158411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.158595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.158627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.158766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.158799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.158929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.158961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.159156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.159188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.159429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.159473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.159603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.159640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.159786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.159818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.159941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.159975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.160101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.160133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.160339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.160392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.160585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.160621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.160833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.160866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.161042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.161074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.161246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.161280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.161482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.161516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.161699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.227 [2024-12-09 17:39:00.161731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.227 qpair failed and we were unable to recover it.
00:28:31.227 [2024-12-09 17:39:00.161922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.161954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.162079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.162112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.162394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.162426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.162535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.162567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.162826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.162859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.163039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.163071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.163245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.163285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.163421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.163454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.163657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.163690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.163888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.163921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 
00:28:31.227 [2024-12-09 17:39:00.164092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.164124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.164364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.164398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.164508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.164540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.164743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.164779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.164946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.164979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.227 qpair failed and we were unable to recover it. 00:28:31.227 [2024-12-09 17:39:00.165189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.227 [2024-12-09 17:39:00.165243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.165376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.165409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.165517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.165550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.165791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.165825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.166026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.166059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 
00:28:31.228 [2024-12-09 17:39:00.166178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.166211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.166337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.166368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.166557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.166590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.166770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.166803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.166934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.166967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.167291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.167324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.167517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.167551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.167677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.167710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.167923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.167955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.168171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.168205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 
00:28:31.228 [2024-12-09 17:39:00.168460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.168494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.168747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.168781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.168899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.168932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.169073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.169111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.169299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.169336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.169553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.169586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.169702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.169735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.169923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.169955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.170065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.170097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.170322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.170356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 
00:28:31.228 [2024-12-09 17:39:00.170564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.170596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.170708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.170741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.170914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.170945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.171122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.171154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.171279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.171312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.171443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.171474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.171575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.171607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.171800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.171833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.172024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.172057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.172235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.172268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 
00:28:31.228 [2024-12-09 17:39:00.172508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.172540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.172717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.172748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.172885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.172918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.173089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.173122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420 00:28:31.228 qpair failed and we were unable to recover it. 00:28:31.228 [2024-12-09 17:39:00.173327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.228 [2024-12-09 17:39:00.173364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.229 qpair failed and we were unable to recover it. 00:28:31.229 [2024-12-09 17:39:00.173558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.229 [2024-12-09 17:39:00.173592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.229 qpair failed and we were unable to recover it. 00:28:31.229 [2024-12-09 17:39:00.173855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.229 [2024-12-09 17:39:00.173888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.229 qpair failed and we were unable to recover it. 00:28:31.229 [2024-12-09 17:39:00.174061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.229 [2024-12-09 17:39:00.174095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.229 qpair failed and we were unable to recover it. 00:28:31.229 [2024-12-09 17:39:00.174215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.229 [2024-12-09 17:39:00.174258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.229 qpair failed and we were unable to recover it. 00:28:31.229 [2024-12-09 17:39:00.174430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.229 [2024-12-09 17:39:00.174462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420 00:28:31.229 qpair failed and we were unable to recover it. 
00:28:31.229 [... connection-failure triplets continue from 17:39:00.174717 through 17:39:00.178117 (tqpair=0x7f8054000b90, then 0x7f804c000b90, addr=10.0.0.2, port=4420), interleaved with the script trace below; identical repetitions elided ...]
00:28:31.229 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:31.229 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:28:31.229 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:31.229 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:31.229 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:31.229-00:28:31.232 [... connection-failure triplets continue from 17:39:00.178242 through 17:39:00.199861 against tqpair values 0x7f804c000b90, 0x7f8054000b90, 0x7f8048000b90, and 0x511500 (addr=10.0.0.2, port=4420), each ending "qpair failed and we were unable to recover it."; identical repetitions elided ...]
00:28:31.232 [2024-12-09 17:39:00.200001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.200034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.200265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.200300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.200410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.200441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.200668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.200700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.200915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.200950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.201212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.201262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.201398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.201430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.201603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.201634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.201826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.201857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.201973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.202007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 
00:28:31.232 [2024-12-09 17:39:00.202200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.202242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.202419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.202452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.202639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.202671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.202928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.202959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.203149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.203181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.232 qpair failed and we were unable to recover it. 00:28:31.232 [2024-12-09 17:39:00.203452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.232 [2024-12-09 17:39:00.203484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.203608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.203640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.203757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.203791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.203993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.204026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.204292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.204326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-12-09 17:39:00.204456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.204488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.204672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.204705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.204918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.204949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.205075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.205107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.205292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.205325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.205519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.205552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.205840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.205871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.206072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.206104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.206245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.206280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.206547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.206580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-12-09 17:39:00.206739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.206771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.206994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.207032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.207214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.207256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.207494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.207527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.207815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.207849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.208114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.208145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.208414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.208448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.208575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.208607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.208873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.208906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-12-09 17:39:00.209180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.233 [2024-12-09 17:39:00.209213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-12-09 17:39:00.209411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.233 [2024-12-09 17:39:00.209443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.233 qpair failed and we were unable to recover it.
00:28:31.233 [... same sequence repeated for tqpair=0x511500 through 17:39:00.210937 ...]
00:28:31.233 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:31.233 [2024-12-09 17:39:00.211198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.233 [2024-12-09 17:39:00.211251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.233 qpair failed and we were unable to recover it.
00:28:31.233 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:31.233 [2024-12-09 17:39:00.211517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.233 [2024-12-09 17:39:00.211551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.233 qpair failed and we were unable to recover it.
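The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 step traced above creates the 64 MB, 512-byte-block RAM bdev that the disconnect test exports over NVMe/TCP; in SPDK's test harness, rpc_cmd is a thin wrapper around scripts/rpc.py pointed at the running target's RPC socket. A minimal standalone sketch of the same call, assuming a target listening on the default /var/tmp/spdk.sock:

    # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0.
    # -s selects the RPC domain socket (the default is shown explicitly here).
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0

    # Sanity-check that the bdev was registered.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0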
00:28:31.233 [2024-12-09 17:39:00.211687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.233 [2024-12-09 17:39:00.211719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.233 qpair failed and we were unable to recover it.
00:28:31.233 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.233 [2024-12-09 17:39:00.211969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.233 [2024-12-09 17:39:00.212002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.233 qpair failed and we were unable to recover it.
00:28:31.233 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:31.233 [2024-12-09 17:39:00.212240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.233 [2024-12-09 17:39:00.212275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.233 qpair failed and we were unable to recover it.
00:28:31.234 [... same sequence repeated for tqpair=0x511500 through 17:39:00.213824 ...]
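The xtrace_disable / set +x pair traced above is the harness's way of muting bash command tracing around chatty sections, and the trap registered a step earlier guarantees that shared-memory stats are collected and the target is torn down however the test exits. A minimal sketch of the same shell pattern; collect_stats, teardown_target, and run_noisy_rpcs are hypothetical stand-ins for the harness's process_shm, nvmftestfini, and RPC helpers:

    # Run cleanup on Ctrl-C, kill, or normal exit; '|| :' keeps a failed
    # stats dump from clobbering the test's real exit status.
    trap 'collect_stats || :; teardown_target' SIGINT SIGTERM EXIT

    set +x            # mute xtrace around a noisy block
    run_noisy_rpcs    # hypothetical: many rpc.py calls
    set -x            # restore tracing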
00:28:31.234 [2024-12-09 17:39:00.214087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.234 [2024-12-09 17:39:00.214119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.234 qpair failed and we were unable to recover it.
00:28:31.235 [... same sequence repeated for tqpair=0x511500 with advancing timestamps, through 17:39:00.229343 ...]
00:28:31.235 [... same sequence repeated for tqpair=0x511500 through 17:39:00.230191 ...]
00:28:31.236 [2024-12-09 17:39:00.230404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.236 [2024-12-09 17:39:00.230443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.236 qpair failed and we were unable to recover it.
00:28:31.236 [... same sequence repeated for tqpair=0x7f804c000b90 through 17:39:00.238454; last occurrence below ...]
00:28:31.237 [2024-12-09 17:39:00.238670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.237 [2024-12-09 17:39:00.238702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.237 qpair failed and we were unable to recover it.
00:28:31.237 [2024-12-09 17:39:00.238929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.238961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.239238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.239272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.239478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.239510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.239728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.239760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.240048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.240080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.240326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.240360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.240553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.240586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.240803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.240836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.241041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.241073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.241271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.241306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 
00:28:31.237 [2024-12-09 17:39:00.241459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.241492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.241690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.241723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.241986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.242020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.242200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.242254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.242383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.242416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.242537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.242570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.242766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.242799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.242976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.243009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.243208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.243252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.243446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.243480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 
00:28:31.237 [2024-12-09 17:39:00.243727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.243762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.243954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.243989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.244253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.244288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.244428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.244468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.244647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.244681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.244926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.244959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.245065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.245098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.245291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.245326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.245467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.245502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.245743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.245778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 
00:28:31.237 [2024-12-09 17:39:00.245905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.245939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.246126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.246160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.246294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.246329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.246508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.246542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.246717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.246752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.247006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.247039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.247226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.237 [2024-12-09 17:39:00.247261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.237 qpair failed and we were unable to recover it. 00:28:31.237 [2024-12-09 17:39:00.247453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.247484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.247672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.247705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.247825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.247858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 
00:28:31.238 [2024-12-09 17:39:00.247988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.248020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.248265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.248300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.248543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.248575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.248702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.248735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.248862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.248895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.249025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.249057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.249250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.249284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.249403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.249436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.249610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.249643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.249834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.249867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 
00:28:31.238 [2024-12-09 17:39:00.250046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.250079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.250270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.250305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 Malloc0 00:28:31.238 [2024-12-09 17:39:00.250425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.250458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.250585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.250619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.250809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.250842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.251102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.251135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.238 [2024-12-09 17:39:00.251263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.251297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.251469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.251503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:31.238 [2024-12-09 17:39:00.251674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.251707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 
00:28:31.238 [2024-12-09 17:39:00.251876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.238 [2024-12-09 17:39:00.251909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.252152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.252184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.252385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.252419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.252546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.252579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.252765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.252797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.253037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.253069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.253201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.253245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.253357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.253389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.253592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.253625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it.
00:28:31.238 [2024-12-09 17:39:00.253891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.253924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.254140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.254172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.254313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.254346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.254525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.254557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.254732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.254765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.255023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.238 [2024-12-09 17:39:00.255056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.238 qpair failed and we were unable to recover it. 00:28:31.238 [2024-12-09 17:39:00.255182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.255215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.255433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.255466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.255587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.255619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.255804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.255836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 
00:28:31.239 [2024-12-09 17:39:00.255977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.256010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.256154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.256187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.256334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.256379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.256572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.256605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.256780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.256811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.256947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.256980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.257104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.257136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.257245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.257280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.257465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.257497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.257636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.257669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 
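The failures above alternate between two qpair pointers, 0x511500 and 0x7f804c000b90, so two controller connections are retrying in parallel. A quick way to tally this from a saved copy of the console output (the filename below is only an assumption):

    # Count refused connects per qpair pointer (console.log is hypothetical):
    grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn
    # Total number of refused connect() attempts:
    grep -c 'errno = 111' console.log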
00:28:31.239 [2024-12-09 17:39:00.257856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.257896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.257981] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.239 [2024-12-09 17:39:00.258072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.258105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.258292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.258328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.258452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.258485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.258670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.258703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.258919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.258950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.259235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.259270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.259399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.259432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.259622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.259654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.259831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.259863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 
00:28:31.239 [2024-12-09 17:39:00.259986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.260018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.260213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.260253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.260433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.260465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.260638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.260670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.260853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.260886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.261149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.261182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.261375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.261408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.261654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.261686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.261895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.261926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.262100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.262133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 
00:28:31.239 [2024-12-09 17:39:00.262389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.262422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.262695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.262728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.262919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.262952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.263214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.263254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.263440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.239 [2024-12-09 17:39:00.263472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.239 qpair failed and we were unable to recover it. 00:28:31.239 [2024-12-09 17:39:00.263600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.263633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.263770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.263801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.263986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.264025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.264287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.264319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.264558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.264591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 
00:28:31.240 [2024-12-09 17:39:00.264758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.264790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.264997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.265028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.265138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.265171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.265359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.265391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.265572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.265604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.265735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.265767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.265889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.265920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.266091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.266123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.266259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.266292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.240 [2024-12-09 17:39:00.266572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.266606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 
00:28:31.240 [2024-12-09 17:39:00.266846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.266891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.240 [2024-12-09 17:39:00.267069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.267100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.267206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.267250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.240 [2024-12-09 17:39:00.267425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.267458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.240 [2024-12-09 17:39:00.267712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.267745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.267957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.267989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.268210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.268251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.268387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.268419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 00:28:31.240 [2024-12-09 17:39:00.268530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.240 [2024-12-09 17:39:00.268562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420 00:28:31.240 qpair failed and we were unable to recover it. 
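The xtrace fragments interleaved above show host/target_disconnect.sh rebuilding the target over RPC while the initiator keeps retrying: line 21 creates the TCP transport (hence the *** TCP Transport Init *** notice) and line 22 creates the subsystem. Issued by hand, the sequence would look roughly like the sketch below; the add-listener step is an assumption based on typical SPDK NVMe-oF target bring-up and is not visible in this excerpt:

    # Target-side RPC sequence (sketch of what the xtrace above drives):
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Assumed follow-up (not shown in this excerpt): add the listener the
    # initiator is retrying against, which is what ends the ECONNREFUSED loop.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420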
00:28:31.240 [the connect() failed (errno = 111) sequence for tqpair=0x511500 continues, 29 more occurrences between 17:39:00.268800 and 17:39:00.274464]
00:28:31.241 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.241 [2024-12-09 17:39:00.274667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.241 [2024-12-09 17:39:00.274699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.241 qpair failed and we were unable to recover it.
00:28:31.241 [2024-12-09 17:39:00.274819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.241 [2024-12-09 17:39:00.274856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.241 qpair failed and we were unable to recover it.
00:28:31.241 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:31.241 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.241 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:31.241 [the connect() failed (errno = 111) sequence for tqpair=0x511500 repeats 8 more times between 17:39:00.275092 and 17:39:00.276660]
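The namespace attached here is the Malloc0 RAM-disk bdev the harness created earlier in the run. A sketch of the equivalent standalone RPC calls (the bdev size and block size below are illustrative assumptions, not values from this log):

    # Create a 64 MiB RAM-backed bdev named Malloc0 (size and block size assumed),
    # then expose it as a namespace of the test subsystem.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0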
00:28:31.241 [the connect() failed (errno = 111) sequence for tqpair=0x511500 continues, 17 more occurrences between 17:39:00.276800 and 17:39:00.280372]
00:28:31.242 [2024-12-09 17:39:00.280616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.280686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [2024-12-09 17:39:00.280854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.280923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [2024-12-09 17:39:00.281163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.281232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8048000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [2024-12-09 17:39:00.281511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.281547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x511500 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [the sequence for tqpair=0x511500 repeats 4 more times between 17:39:00.281837 and 17:39:00.282643]
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:31.242 [the sequence for tqpair=0x511500 repeats 12 more times between 17:39:00.282886 and 17:39:00.285188]
00:28:31.242 [2024-12-09 17:39:00.285349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.285388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f804c000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
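Adding the listener is the step that finally makes the target reachable: until it completes, nothing is bound to 10.0.0.2:4420, so the kernel answers every host probe with errno 111 (ECONNREFUSED), which is exactly the storm of connect() failures above. A sketch of the step, together with the transport-creation call that must precede any listener (the transport options are assumptions; the listener arguments come from the trace):

    # The TCP transport must exist before a listener can be added.
    scripts/rpc.py nvmf_create_transport -t TCP
    # Bind the subsystem to the address/port the host keeps probing.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420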
00:28:31.242 [2024-12-09 17:39:00.285666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.285710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [2024-12-09 17:39:00.285930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.285964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [2024-12-09 17:39:00.286177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.242 [2024-12-09 17:39:00.286196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:31.242 [2024-12-09 17:39:00.286209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8054000b90 with addr=10.0.0.2, port=4420
00:28:31.242 qpair failed and we were unable to recover it.
00:28:31.242 [2024-12-09 17:39:00.288647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.242 [2024-12-09 17:39:00.288815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.242 [2024-12-09 17:39:00.288871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.242 [2024-12-09 17:39:00.288895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.242 [2024-12-09 17:39:00.288916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.242 [2024-12-09 17:39:00.288968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.242 qpair failed and we were unable to recover it.
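The failure mode changes here: with the listener up, the TCP connection now succeeds, but the NVMe-oF Fabrics CONNECT command is rejected. In the status pair, sct 1 is the Command Specific status code type, and sc 130 is 0x82, which for the Fabrics CONNECT command means Invalid Parameters. That matches the target-side "Unknown controller ID 0x1": the I/O queue pair's CONNECT names a controller the target no longer has, which is expected while this disconnect test deliberately tears controllers down. The NVMe spec status tables are in hex while the log prints decimal, so:

    # sc is logged in decimal; the NVMe spec status tables use hex.
    printf '0x%02x\n' 130   # -> 0x82 (Fabrics CONNECT: Invalid Parameters)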
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:31.242 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.243 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:31.243 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.243 [2024-12-09 17:39:00.298571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.243 [2024-12-09 17:39:00.298675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.243 [2024-12-09 17:39:00.298714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.243 [2024-12-09 17:39:00.298736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.243 [2024-12-09 17:39:00.298755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.243 [2024-12-09 17:39:00.298799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.243 qpair failed and we were unable to recover it.
00:28:31.243 17:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2738511
00:28:31.243 [2024-12-09 17:39:00.308590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.243 [2024-12-09 17:39:00.308668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.243 [2024-12-09 17:39:00.308695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.243 [2024-12-09 17:39:00.308710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.243 [2024-12-09 17:39:00.308723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.243 [2024-12-09 17:39:00.308753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.243 qpair failed and we were unable to recover it.
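The wait 2738511 in the trace is the script blocking on a host-side process it launched in the background earlier (PID 2738511 in this run; this log does not show it being started). The overall shape of that pattern, as a simplified sketch with a stand-in workload:

    # Start the host-side workload in the background, reconfigure the target
    # over RPC while it runs, then block until the workload exits.
    sleep 30 &                # stand-in for the backgrounded host process
    pid=$!
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    wait "$pid"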
00:28:31.243 [2024-12-09 17:39:00.318577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.243 [2024-12-09 17:39:00.318645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.243 [2024-12-09 17:39:00.318665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.243 [2024-12-09 17:39:00.318676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.243 [2024-12-09 17:39:00.318686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.243 [2024-12-09 17:39:00.318718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.243 qpair failed and we were unable to recover it. 00:28:31.243 [2024-12-09 17:39:00.328522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.243 [2024-12-09 17:39:00.328586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.243 [2024-12-09 17:39:00.328601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.243 [2024-12-09 17:39:00.328609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.243 [2024-12-09 17:39:00.328616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.243 [2024-12-09 17:39:00.328632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.243 qpair failed and we were unable to recover it. 00:28:31.243 [2024-12-09 17:39:00.338599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.243 [2024-12-09 17:39:00.338654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.243 [2024-12-09 17:39:00.338668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.243 [2024-12-09 17:39:00.338675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.243 [2024-12-09 17:39:00.338682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.243 [2024-12-09 17:39:00.338697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.243 qpair failed and we were unable to recover it. 
00:28:31.243 [2024-12-09 17:39:00.348552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.243 [2024-12-09 17:39:00.348607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.243 [2024-12-09 17:39:00.348621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.243 [2024-12-09 17:39:00.348628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.243 [2024-12-09 17:39:00.348634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.243 [2024-12-09 17:39:00.348649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.243 qpair failed and we were unable to recover it. 00:28:31.243 [2024-12-09 17:39:00.358666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.243 [2024-12-09 17:39:00.358721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.243 [2024-12-09 17:39:00.358735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.243 [2024-12-09 17:39:00.358741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.243 [2024-12-09 17:39:00.358748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.243 [2024-12-09 17:39:00.358762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.243 qpair failed and we were unable to recover it. 00:28:31.243 [2024-12-09 17:39:00.368616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.243 [2024-12-09 17:39:00.368673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.243 [2024-12-09 17:39:00.368688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.243 [2024-12-09 17:39:00.368696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.243 [2024-12-09 17:39:00.368702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.243 [2024-12-09 17:39:00.368717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.243 qpair failed and we were unable to recover it. 
00:28:31.503 [2024-12-09 17:39:00.378716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.378781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.378795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.378802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.378808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.378823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.388741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.388797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.388811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.388819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.388825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.388841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.398711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.398767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.398780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.398787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.398795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.398810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 
00:28:31.503 [2024-12-09 17:39:00.408773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.408835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.408852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.408860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.408867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.408882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.418713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.418773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.418786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.418793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.418800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.418814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.428818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.428875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.428888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.428895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.428902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.428916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 
00:28:31.503 [2024-12-09 17:39:00.438847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.438921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.438935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.438943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.438949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.438964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.448871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.448937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.448951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.448959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.448969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.448986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.458913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.458970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.458984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.458991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.458998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.459013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 
00:28:31.503 [2024-12-09 17:39:00.468914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.468970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.468985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.468993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.468999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.469016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.478940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.479002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.479016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.479023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.479030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.479045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 00:28:31.503 [2024-12-09 17:39:00.488973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.489035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.489049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.489056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.489063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.489078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.503 qpair failed and we were unable to recover it. 
00:28:31.503 [2024-12-09 17:39:00.498920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.503 [2024-12-09 17:39:00.498977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.503 [2024-12-09 17:39:00.498991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.503 [2024-12-09 17:39:00.498998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.503 [2024-12-09 17:39:00.499004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.503 [2024-12-09 17:39:00.499019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 00:28:31.504 [2024-12-09 17:39:00.509055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.509111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.509124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.509131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.509137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.509152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 00:28:31.504 [2024-12-09 17:39:00.519064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.519124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.519138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.519144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.519151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.519166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 
00:28:31.504 [2024-12-09 17:39:00.529086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.529142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.529155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.529162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.529168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.529183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 00:28:31.504 [2024-12-09 17:39:00.539108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.539161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.539179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.539187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.539193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.539208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 00:28:31.504 [2024-12-09 17:39:00.549136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.549191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.549205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.549213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.549225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.549240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 
00:28:31.504 [2024-12-09 17:39:00.559162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.559232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.559246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.559254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.559260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.559276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 00:28:31.504 [2024-12-09 17:39:00.569148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.569207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.569225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.569233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.569239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.569256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 00:28:31.504 [2024-12-09 17:39:00.579204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.504 [2024-12-09 17:39:00.579268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.504 [2024-12-09 17:39:00.579282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.504 [2024-12-09 17:39:00.579290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.504 [2024-12-09 17:39:00.579299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:31.504 [2024-12-09 17:39:00.579315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.504 qpair failed and we were unable to recover it. 
00:28:31.504 [2024-12-09 17:39:00.589254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.504 [2024-12-09 17:39:00.589306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.504 [2024-12-09 17:39:00.589320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.504 [2024-12-09 17:39:00.589327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.504 [2024-12-09 17:39:00.589333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.504 [2024-12-09 17:39:00.589349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.504 qpair failed and we were unable to recover it.
00:28:31.504 [2024-12-09 17:39:00.599277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.504 [2024-12-09 17:39:00.599347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.504 [2024-12-09 17:39:00.599362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.504 [2024-12-09 17:39:00.599369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.504 [2024-12-09 17:39:00.599375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.504 [2024-12-09 17:39:00.599391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.504 qpair failed and we were unable to recover it.
00:28:31.504 [2024-12-09 17:39:00.609342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.504 [2024-12-09 17:39:00.609400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.504 [2024-12-09 17:39:00.609413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.504 [2024-12-09 17:39:00.609421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.504 [2024-12-09 17:39:00.609428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.504 [2024-12-09 17:39:00.609443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.504 qpair failed and we were unable to recover it.
00:28:31.504 [2024-12-09 17:39:00.619377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.504 [2024-12-09 17:39:00.619432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.504 [2024-12-09 17:39:00.619445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.504 [2024-12-09 17:39:00.619452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.504 [2024-12-09 17:39:00.619459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.504 [2024-12-09 17:39:00.619475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.504 qpair failed and we were unable to recover it.
00:28:31.504 [2024-12-09 17:39:00.629367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.504 [2024-12-09 17:39:00.629424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.505 [2024-12-09 17:39:00.629437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.505 [2024-12-09 17:39:00.629445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.505 [2024-12-09 17:39:00.629452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.505 [2024-12-09 17:39:00.629467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.505 qpair failed and we were unable to recover it.
00:28:31.505 [2024-12-09 17:39:00.639360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.505 [2024-12-09 17:39:00.639417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.505 [2024-12-09 17:39:00.639431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.505 [2024-12-09 17:39:00.639438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.505 [2024-12-09 17:39:00.639445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.505 [2024-12-09 17:39:00.639460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.505 qpair failed and we were unable to recover it.
00:28:31.505 [2024-12-09 17:39:00.649374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.505 [2024-12-09 17:39:00.649431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.505 [2024-12-09 17:39:00.649444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.505 [2024-12-09 17:39:00.649451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.505 [2024-12-09 17:39:00.649458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.505 [2024-12-09 17:39:00.649473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.505 qpair failed and we were unable to recover it.
00:28:31.505 [2024-12-09 17:39:00.659464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.505 [2024-12-09 17:39:00.659518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.505 [2024-12-09 17:39:00.659532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.505 [2024-12-09 17:39:00.659539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.505 [2024-12-09 17:39:00.659545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.505 [2024-12-09 17:39:00.659560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.505 qpair failed and we were unable to recover it.
00:28:31.505 [2024-12-09 17:39:00.669500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.505 [2024-12-09 17:39:00.669562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.505 [2024-12-09 17:39:00.669577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.505 [2024-12-09 17:39:00.669584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.505 [2024-12-09 17:39:00.669590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.505 [2024-12-09 17:39:00.669605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.505 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.679595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.679658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.679671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.679678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.679684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.679700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.689516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.689578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.689592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.689599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.689606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.689622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.699583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.699645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.699659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.699666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.699672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.699687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.709592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.709650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.709663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.709675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.709681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.709696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.719592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.719666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.719680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.719688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.719695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.719710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.729723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.729781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.729794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.729802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.729809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.729824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.739637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.739688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.739702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.739709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.739715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.739730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.749660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.749715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.749729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.749736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.749742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.749760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.759764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.759833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.759846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.759854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.759860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.759875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.769732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.769788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.769802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.769810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.769816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.769833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.779751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.779806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.779820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.779828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.779834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.779850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.764 [2024-12-09 17:39:00.789759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.764 [2024-12-09 17:39:00.789824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.764 [2024-12-09 17:39:00.789838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.764 [2024-12-09 17:39:00.789846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.764 [2024-12-09 17:39:00.789853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.764 [2024-12-09 17:39:00.789869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.764 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.799915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.799983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.799998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.800005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.800011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.800026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.809896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.809948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.809962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.809969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.809975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.809991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.819947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.820002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.820015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.820022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.820028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.820043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.829994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.830050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.830064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.830071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.830078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.830094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.840017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.840075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.840092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.840100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.840106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.840122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.850017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.850096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.850110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.850117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.850124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.850139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.860074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.860130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.860145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.860152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.860158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.860174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.870021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.870077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.870091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.870099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.870106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.870121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.880084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.880149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.880162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.880169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.880175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.880194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.890075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.890133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.890146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.890153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.890160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.890175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.900173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.900240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.900254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.900261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.900268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.900283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.910198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.910260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.910273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.910280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.910286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.910302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.920232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.920290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.765 [2024-12-09 17:39:00.920304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.765 [2024-12-09 17:39:00.920311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.765 [2024-12-09 17:39:00.920318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.765 [2024-12-09 17:39:00.920333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.765 qpair failed and we were unable to recover it.
00:28:31.765 [2024-12-09 17:39:00.930255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.765 [2024-12-09 17:39:00.930314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.766 [2024-12-09 17:39:00.930327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.766 [2024-12-09 17:39:00.930335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.766 [2024-12-09 17:39:00.930341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:31.766 [2024-12-09 17:39:00.930356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:31.766 qpair failed and we were unable to recover it.
00:28:31.766 [2024-12-09 17:39:00.940303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:00.940367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:00.940380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:00.940387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:00.940398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:00.940414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:00.950329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:00.950387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:00.950400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:00.950407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:00.950414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:00.950429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:00.960346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:00.960417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:00.960430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:00.960437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:00.960443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:00.960457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:00.970378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:00.970460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:00.970478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:00.970485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:00.970491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:00.970506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:00.980422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:00.980480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:00.980494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:00.980501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:00.980507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:00.980522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:00.990474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:00.990526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:00.990539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:00.990546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:00.990553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:00.990568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:01.000465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:01.000522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:01.000535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:01.000543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:01.000550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:01.000565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:01.010501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:01.010562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:01.010575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:01.010583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:01.010593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:01.010608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:01.020511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:01.020591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:01.020604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:01.020611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:01.020617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:01.020631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:01.030531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:01.030585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:01.030598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:01.030605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:01.030612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.025 [2024-12-09 17:39:01.030627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.025 qpair failed and we were unable to recover it.
00:28:32.025 [2024-12-09 17:39:01.040571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.025 [2024-12-09 17:39:01.040631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.025 [2024-12-09 17:39:01.040644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.025 [2024-12-09 17:39:01.040651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.025 [2024-12-09 17:39:01.040657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.040672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.050635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.050694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.050708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.050715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.050721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.050736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.060631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.060716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.060730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.060737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.060743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.060758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.070590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.070650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.070664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.070672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.070679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.070694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.080764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.080831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.080844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.080851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.080858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.080873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.090746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.090805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.090819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.090826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.090833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.090848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.100796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.100851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.100866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.100874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.100879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.100894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.110849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.110908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.110921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.110929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.110935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.110950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.120802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.120859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.120872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.120878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.120885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.120900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.130762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.130819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.130832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.130839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.130845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.130860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.140789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.140874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.140887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.140898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.140904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.140919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.150880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.026 [2024-12-09 17:39:01.150934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.026 [2024-12-09 17:39:01.150950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.026 [2024-12-09 17:39:01.150959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.026 [2024-12-09 17:39:01.150966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.026 [2024-12-09 17:39:01.150981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.026 qpair failed and we were unable to recover it.
00:28:32.026 [2024-12-09 17:39:01.160908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.026 [2024-12-09 17:39:01.161005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.026 [2024-12-09 17:39:01.161019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.026 [2024-12-09 17:39:01.161026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.026 [2024-12-09 17:39:01.161033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.026 [2024-12-09 17:39:01.161049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.026 qpair failed and we were unable to recover it. 00:28:32.026 [2024-12-09 17:39:01.170929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.026 [2024-12-09 17:39:01.170983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.026 [2024-12-09 17:39:01.170997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.026 [2024-12-09 17:39:01.171003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.026 [2024-12-09 17:39:01.171010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.026 [2024-12-09 17:39:01.171025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.026 qpair failed and we were unable to recover it. 00:28:32.026 [2024-12-09 17:39:01.181029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.027 [2024-12-09 17:39:01.181119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.027 [2024-12-09 17:39:01.181133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.027 [2024-12-09 17:39:01.181140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.027 [2024-12-09 17:39:01.181146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.027 [2024-12-09 17:39:01.181161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.027 qpair failed and we were unable to recover it. 
00:28:32.027 [2024-12-09 17:39:01.190987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.027 [2024-12-09 17:39:01.191045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.027 [2024-12-09 17:39:01.191059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.027 [2024-12-09 17:39:01.191067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.027 [2024-12-09 17:39:01.191073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.027 [2024-12-09 17:39:01.191089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.027 qpair failed and we were unable to recover it. 00:28:32.027 [2024-12-09 17:39:01.201024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.027 [2024-12-09 17:39:01.201084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.027 [2024-12-09 17:39:01.201097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.027 [2024-12-09 17:39:01.201104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.027 [2024-12-09 17:39:01.201111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.027 [2024-12-09 17:39:01.201125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.027 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-09 17:39:01.211019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.285 [2024-12-09 17:39:01.211108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.285 [2024-12-09 17:39:01.211121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.285 [2024-12-09 17:39:01.211128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.285 [2024-12-09 17:39:01.211134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.285 [2024-12-09 17:39:01.211148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-09 17:39:01.221027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.285 [2024-12-09 17:39:01.221078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.285 [2024-12-09 17:39:01.221091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.285 [2024-12-09 17:39:01.221098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.285 [2024-12-09 17:39:01.221104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.285 [2024-12-09 17:39:01.221119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-09 17:39:01.231095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.285 [2024-12-09 17:39:01.231168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.285 [2024-12-09 17:39:01.231182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.285 [2024-12-09 17:39:01.231189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.285 [2024-12-09 17:39:01.231196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.285 [2024-12-09 17:39:01.231211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-09 17:39:01.241139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.285 [2024-12-09 17:39:01.241195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.241209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.241216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.241226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.241241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-09 17:39:01.251164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.251233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.251246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.251253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.251260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.251274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.261194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.261256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.261269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.261277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.261283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.261297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.271245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.271317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.271331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.271342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.271348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.271364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-09 17:39:01.281273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.281374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.281387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.281394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.281400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.281415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.291197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.291264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.291278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.291285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.291291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.291306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.301279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.301333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.301346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.301353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.301360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.301376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-09 17:39:01.311314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.311368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.311382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.311389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.311396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.311414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.321343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.321401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.321414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.321421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.321428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.321443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.331382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.331440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.331454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.331461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.331468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.331482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-09 17:39:01.341495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.341585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.341612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.341619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.341625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.341646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.351344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.351406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.351420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.351427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.351434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.351449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-09 17:39:01.361447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.361509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.361522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.361529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.361536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.286 [2024-12-09 17:39:01.361550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-09 17:39:01.371482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.286 [2024-12-09 17:39:01.371537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.286 [2024-12-09 17:39:01.371551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.286 [2024-12-09 17:39:01.371558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.286 [2024-12-09 17:39:01.371564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.371580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-09 17:39:01.381514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.381567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.381581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.381588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.381595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.381610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-09 17:39:01.391560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.391618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.391631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.391638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.391645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.391660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 
00:28:32.287 [2024-12-09 17:39:01.401582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.401660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.401676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.401684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.401690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.401705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-09 17:39:01.411543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.411603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.411616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.411624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.411630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.411646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-09 17:39:01.421573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.421627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.421640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.421647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.421653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.421668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 
00:28:32.287 [2024-12-09 17:39:01.431674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.431732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.431745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.431753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.431760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.431775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-09 17:39:01.441739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.441794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.441807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.441814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.441821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.441840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-09 17:39:01.451739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.451796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.451809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.451816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.451823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.451838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 
00:28:32.287 [2024-12-09 17:39:01.461772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.287 [2024-12-09 17:39:01.461834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.287 [2024-12-09 17:39:01.461848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.287 [2024-12-09 17:39:01.461855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.287 [2024-12-09 17:39:01.461861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.287 [2024-12-09 17:39:01.461876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.545 [2024-12-09 17:39:01.471816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.545 [2024-12-09 17:39:01.471877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.545 [2024-12-09 17:39:01.471891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.545 [2024-12-09 17:39:01.471899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.545 [2024-12-09 17:39:01.471905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.471920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.481760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.481824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.481837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.481844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.481851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.481865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 
00:28:32.546 [2024-12-09 17:39:01.491883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.491949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.491962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.491969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.491975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.491990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.501787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.501839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.501853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.501860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.501867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.501882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.511892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.511948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.511962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.511969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.511975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.511990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 
00:28:32.546 [2024-12-09 17:39:01.521857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.521913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.521926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.521933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.521940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.521955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.531951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.532005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.532022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.532029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.532035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.532051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.541989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.542039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.542053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.542059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.542066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.542081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 
00:28:32.546 [2024-12-09 17:39:01.552051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.552107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.552120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.552127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.552133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.552148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.562077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.562151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.562165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.562172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.562178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.562193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.572046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.572144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.572160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.572167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.572178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.572193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 
00:28:32.546 [2024-12-09 17:39:01.582121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.582204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.582221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.582229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.582235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.582251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.592124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.592182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.592195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.592203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.592210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.592230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 00:28:32.546 [2024-12-09 17:39:01.602164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.546 [2024-12-09 17:39:01.602235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.546 [2024-12-09 17:39:01.602248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.546 [2024-12-09 17:39:01.602255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.546 [2024-12-09 17:39:01.602261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.546 [2024-12-09 17:39:01.602277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.546 qpair failed and we were unable to recover it. 
00:28:32.547 [2024-12-09 17:39:01.612184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.547 [2024-12-09 17:39:01.612246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.547 [2024-12-09 17:39:01.612259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.547 [2024-12-09 17:39:01.612266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.547 [2024-12-09 17:39:01.612272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.547 [2024-12-09 17:39:01.612287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.547 qpair failed and we were unable to recover it. 00:28:32.547 [2024-12-09 17:39:01.622256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.547 [2024-12-09 17:39:01.622313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.547 [2024-12-09 17:39:01.622326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.547 [2024-12-09 17:39:01.622333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.547 [2024-12-09 17:39:01.622340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.547 [2024-12-09 17:39:01.622354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.547 qpair failed and we were unable to recover it. 00:28:32.547 [2024-12-09 17:39:01.632251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.547 [2024-12-09 17:39:01.632306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.547 [2024-12-09 17:39:01.632319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.547 [2024-12-09 17:39:01.632326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.547 [2024-12-09 17:39:01.632332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.547 [2024-12-09 17:39:01.632348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.547 qpair failed and we were unable to recover it. 
00:28:32.547 [2024-12-09 17:39:01.642258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.547 [2024-12-09 17:39:01.642312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.547 [2024-12-09 17:39:01.642325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.547 [2024-12-09 17:39:01.642332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.547 [2024-12-09 17:39:01.642339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.547 [2024-12-09 17:39:01.642354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.547 qpair failed and we were unable to recover it. 00:28:32.547 [2024-12-09 17:39:01.652294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.547 [2024-12-09 17:39:01.652352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.547 [2024-12-09 17:39:01.652366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.547 [2024-12-09 17:39:01.652375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.547 [2024-12-09 17:39:01.652382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.547 [2024-12-09 17:39:01.652397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.547 qpair failed and we were unable to recover it. 00:28:32.547 [2024-12-09 17:39:01.662322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.547 [2024-12-09 17:39:01.662380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.547 [2024-12-09 17:39:01.662397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.547 [2024-12-09 17:39:01.662404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.547 [2024-12-09 17:39:01.662411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:32.547 [2024-12-09 17:39:01.662426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.547 qpair failed and we were unable to recover it. 
00:28:32.547 [2024-12-09 17:39:01.672344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.547 [2024-12-09 17:39:01.672396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.547 [2024-12-09 17:39:01.672410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.547 [2024-12-09 17:39:01.672417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.547 [2024-12-09 17:39:01.672424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.547 [2024-12-09 17:39:01.672439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.547 qpair failed and we were unable to recover it.
00:28:32.547 [2024-12-09 17:39:01.682433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.547 [2024-12-09 17:39:01.682536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.547 [2024-12-09 17:39:01.682549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.547 [2024-12-09 17:39:01.682556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.547 [2024-12-09 17:39:01.682562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.547 [2024-12-09 17:39:01.682577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.547 qpair failed and we were unable to recover it.
00:28:32.547 [2024-12-09 17:39:01.692408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.547 [2024-12-09 17:39:01.692466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.547 [2024-12-09 17:39:01.692479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.547 [2024-12-09 17:39:01.692486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.547 [2024-12-09 17:39:01.692493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.547 [2024-12-09 17:39:01.692508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.547 qpair failed and we were unable to recover it.
00:28:32.547 [2024-12-09 17:39:01.702441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.547 [2024-12-09 17:39:01.702492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.547 [2024-12-09 17:39:01.702506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.547 [2024-12-09 17:39:01.702516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.547 [2024-12-09 17:39:01.702522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.547 [2024-12-09 17:39:01.702538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.547 qpair failed and we were unable to recover it.
00:28:32.547 [2024-12-09 17:39:01.712457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.547 [2024-12-09 17:39:01.712506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.547 [2024-12-09 17:39:01.712519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.547 [2024-12-09 17:39:01.712526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.547 [2024-12-09 17:39:01.712532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.547 [2024-12-09 17:39:01.712547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.547 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.722600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.722678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.722691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.722699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.722705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.722720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.732450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.732512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.732526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.732533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.732541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.732556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.742549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.742615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.742629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.742636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.742642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.742657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.752562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.752616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.752629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.752636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.752643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.752657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.762607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.762662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.762675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.762682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.762689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.762704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.772658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.772714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.772728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.772735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.772742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.772757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.782637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.782728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.782741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.782748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.782754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.782769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.792744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.792832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.792846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.792853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.792859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.792874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.802684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.802768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.802781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.802789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.802795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.802809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.812656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.812709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.812722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.812730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.812737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.805 [2024-12-09 17:39:01.812752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.805 qpair failed and we were unable to recover it.
00:28:32.805 [2024-12-09 17:39:01.822678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.805 [2024-12-09 17:39:01.822754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.805 [2024-12-09 17:39:01.822768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.805 [2024-12-09 17:39:01.822775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.805 [2024-12-09 17:39:01.822782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.822796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.832777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.832831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.832845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.832856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.832863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.832878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.842792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.842849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.842861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.842868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.842875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.842890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.852864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.852920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.852933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.852940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.852947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.852961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.862803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.862899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.862912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.862918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.862924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.862939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.872896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.872952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.872966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.872973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.872979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.872997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.882928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.882986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.883001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.883008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.883014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.883029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.892990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.893045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.893058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.893065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.893072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.893087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.902991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.903048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.903064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.903073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.903082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.903100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.913019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.913078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.913094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.913102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.913108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.913124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.923111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.923167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.923181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.923188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.923194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.923210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.933092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.933161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.933175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.933182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.933188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.933204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.943027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.943091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.943104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.943112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.943118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.943132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.953056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.953132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.953146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.953153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.953159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.953173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.963200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.963263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.963280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.963288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.963294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.963309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:32.806 [2024-12-09 17:39:01.973172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.806 [2024-12-09 17:39:01.973253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.806 [2024-12-09 17:39:01.973267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.806 [2024-12-09 17:39:01.973275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.806 [2024-12-09 17:39:01.973281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:32.806 [2024-12-09 17:39:01.973296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.806 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:01.983186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:01.983253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:01.983272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:01.983280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:01.983286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:01.983304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:01.993277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:01.993342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:01.993358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:01.993366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:01.993373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:01.993390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:02.003316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:02.003392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:02.003406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:02.003413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:02.003422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:02.003438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:02.013251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:02.013355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:02.013369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:02.013376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:02.013382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:02.013397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:02.023388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:02.023456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:02.023470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:02.023477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:02.023484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:02.023499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:02.033355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:02.033413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:02.033427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:02.033435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:02.033442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:02.033457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:02.043467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:02.043542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:02.043556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:02.043564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:02.043570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:02.043585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.064 [2024-12-09 17:39:02.053423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.064 [2024-12-09 17:39:02.053478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.064 [2024-12-09 17:39:02.053492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.064 [2024-12-09 17:39:02.053500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.064 [2024-12-09 17:39:02.053506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.064 [2024-12-09 17:39:02.053521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.064 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.063427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.063508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.063522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.063529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.063535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.063550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.073452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.073509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.073524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.073532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.073539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.073554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.083519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.083572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.083585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.083592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.083598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.083614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.093525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.093582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.093598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.093605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.093612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.093626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.103492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.103544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.103557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.103564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.103571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.103586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.113516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.113572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.113585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.113592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.113598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.113613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.123551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.123607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.123620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.123627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.123634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.123648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.133664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.133737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.133751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.133758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.133768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.133783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.143636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.143726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.143739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.143746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.143753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.143768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.153667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.153733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.153746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.153754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.153760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.153774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.163742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.163815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.163828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.163835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.163841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.163855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.173741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.173839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.173853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.173860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.173866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.173881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.183794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.183852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.065 [2024-12-09 17:39:02.183865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.065 [2024-12-09 17:39:02.183872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.065 [2024-12-09 17:39:02.183878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.065 [2024-12-09 17:39:02.183893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.065 qpair failed and we were unable to recover it.
00:28:33.065 [2024-12-09 17:39:02.193732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.065 [2024-12-09 17:39:02.193781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.066 [2024-12-09 17:39:02.193795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.066 [2024-12-09 17:39:02.193802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.066 [2024-12-09 17:39:02.193808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.066 [2024-12-09 17:39:02.193823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.066 qpair failed and we were unable to recover it.
00:28:33.066 [2024-12-09 17:39:02.203806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.066 [2024-12-09 17:39:02.203887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.066 [2024-12-09 17:39:02.203901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.066 [2024-12-09 17:39:02.203908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.066 [2024-12-09 17:39:02.203915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.066 [2024-12-09 17:39:02.203931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.066 qpair failed and we were unable to recover it.
00:28:33.066 [2024-12-09 17:39:02.213802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.066 [2024-12-09 17:39:02.213860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.066 [2024-12-09 17:39:02.213873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.066 [2024-12-09 17:39:02.213880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.066 [2024-12-09 17:39:02.213887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.066 [2024-12-09 17:39:02.213901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.066 qpair failed and we were unable to recover it.
00:28:33.066 [2024-12-09 17:39:02.223820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.066 [2024-12-09 17:39:02.223881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.066 [2024-12-09 17:39:02.223897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.066 [2024-12-09 17:39:02.223905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.066 [2024-12-09 17:39:02.223911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.066 [2024-12-09 17:39:02.223926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.066 qpair failed and we were unable to recover it.
00:28:33.066 [2024-12-09 17:39:02.233924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.066 [2024-12-09 17:39:02.233975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.066 [2024-12-09 17:39:02.233988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.066 [2024-12-09 17:39:02.233995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.066 [2024-12-09 17:39:02.234001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.066 [2024-12-09 17:39:02.234017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.066 qpair failed and we were unable to recover it.
00:28:33.324 [2024-12-09 17:39:02.244033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.324 [2024-12-09 17:39:02.244094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.324 [2024-12-09 17:39:02.244112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.324 [2024-12-09 17:39:02.244120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.324 [2024-12-09 17:39:02.244127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.324 [2024-12-09 17:39:02.244144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.324 qpair failed and we were unable to recover it.
00:28:33.324 [2024-12-09 17:39:02.253995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.324 [2024-12-09 17:39:02.254058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.324 [2024-12-09 17:39:02.254075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.324 [2024-12-09 17:39:02.254083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.324 [2024-12-09 17:39:02.254089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.324 [2024-12-09 17:39:02.254107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.324 qpair failed and we were unable to recover it.
00:28:33.324 [2024-12-09 17:39:02.264019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.324 [2024-12-09 17:39:02.264076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.324 [2024-12-09 17:39:02.264089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.324 [2024-12-09 17:39:02.264100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.324 [2024-12-09 17:39:02.264107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.324 [2024-12-09 17:39:02.264122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.324 qpair failed and we were unable to recover it.
00:28:33.324 [2024-12-09 17:39:02.273991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.274049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.274064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.274073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.274079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.274095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.284023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.284077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.284090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.284098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.284105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.284120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.294165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.294230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.294244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.294252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.294258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.294273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.304131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.304183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.304196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.304203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.304209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.304230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.314161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.314220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.314234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.314241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.314247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.314262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.324175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.324233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.324245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.324252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.324259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.324273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.334238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.334297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.334310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.334317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.334324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.334338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.344289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.344341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.344353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.344360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.344367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.344382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.354285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.325 [2024-12-09 17:39:02.354342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.325 [2024-12-09 17:39:02.354356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.325 [2024-12-09 17:39:02.354363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.325 [2024-12-09 17:39:02.354369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.325 [2024-12-09 17:39:02.354384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.325 qpair failed and we were unable to recover it.
00:28:33.325 [2024-12-09 17:39:02.364282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.325 [2024-12-09 17:39:02.364339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.325 [2024-12-09 17:39:02.364353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.325 [2024-12-09 17:39:02.364359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.325 [2024-12-09 17:39:02.364366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.325 [2024-12-09 17:39:02.364382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.325 qpair failed and we were unable to recover it. 00:28:33.325 [2024-12-09 17:39:02.374349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.325 [2024-12-09 17:39:02.374412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.325 [2024-12-09 17:39:02.374426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.325 [2024-12-09 17:39:02.374434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.325 [2024-12-09 17:39:02.374440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.325 [2024-12-09 17:39:02.374455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.325 qpair failed and we were unable to recover it. 00:28:33.325 [2024-12-09 17:39:02.384345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.325 [2024-12-09 17:39:02.384397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.325 [2024-12-09 17:39:02.384410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.325 [2024-12-09 17:39:02.384417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.325 [2024-12-09 17:39:02.384424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.325 [2024-12-09 17:39:02.384438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.325 qpair failed and we were unable to recover it. 
00:28:33.325 [2024-12-09 17:39:02.394382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.325 [2024-12-09 17:39:02.394439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.325 [2024-12-09 17:39:02.394453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.325 [2024-12-09 17:39:02.394463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.325 [2024-12-09 17:39:02.394469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.325 [2024-12-09 17:39:02.394484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.325 qpair failed and we were unable to recover it. 00:28:33.325 [2024-12-09 17:39:02.404370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.325 [2024-12-09 17:39:02.404432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.325 [2024-12-09 17:39:02.404446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.325 [2024-12-09 17:39:02.404453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.404459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.404474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.326 [2024-12-09 17:39:02.414459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.414513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.414527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.414534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.414541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.414557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 
00:28:33.326 [2024-12-09 17:39:02.424469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.424546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.424560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.424567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.424573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.424587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.326 [2024-12-09 17:39:02.434491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.434542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.434556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.434563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.434569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.434587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.326 [2024-12-09 17:39:02.444553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.444607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.444619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.444626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.444632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.444647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 
00:28:33.326 [2024-12-09 17:39:02.454554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.454607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.454621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.454628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.454634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.454647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.326 [2024-12-09 17:39:02.464554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.464612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.464625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.464633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.464639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.464653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.326 [2024-12-09 17:39:02.474647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.474706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.474720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.474728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.474734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.474749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 
00:28:33.326 [2024-12-09 17:39:02.484702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.484783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.484796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.484804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.484810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.484824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.326 [2024-12-09 17:39:02.494607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.326 [2024-12-09 17:39:02.494686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.326 [2024-12-09 17:39:02.494699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.326 [2024-12-09 17:39:02.494706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.326 [2024-12-09 17:39:02.494713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.326 [2024-12-09 17:39:02.494727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.326 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.504715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.504782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.504800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.504808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.504814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.504832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-12-09 17:39:02.514665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.514724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.514740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.514748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.514754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.514771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.524764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.524827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.524844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.524851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.524857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.524872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.534802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.534862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.534876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.534884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.534891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.534906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-12-09 17:39:02.544844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.544912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.544925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.544932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.544938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.544954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.554863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.554912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.554926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.554933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.554940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.554955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.564815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.564872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.564886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.564893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.564903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.564918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-12-09 17:39:02.574844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.574900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.574915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.574924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.574935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.574952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.584844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.584897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.584911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.584918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.584924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.584940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.594972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.595040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.595053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.595061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.595067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.595082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-12-09 17:39:02.604917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.604974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.604987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.604994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.605001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.585 [2024-12-09 17:39:02.605016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-12-09 17:39:02.615014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.585 [2024-12-09 17:39:02.615098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.585 [2024-12-09 17:39:02.615112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.585 [2024-12-09 17:39:02.615119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.585 [2024-12-09 17:39:02.615126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.615141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.625041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.625093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.625106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.625114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.625120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.625136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-12-09 17:39:02.635064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.635136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.635150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.635158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.635164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.635178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.645108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.645186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.645199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.645206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.645213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.645231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.655161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.655226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.655242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.655250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.655256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.655271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-12-09 17:39:02.665189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.665299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.665312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.665319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.665325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.665339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.675172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.675231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.675245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.675253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.675259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.675275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.685238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.685294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.685307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.685315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.685321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.685337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-12-09 17:39:02.695237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.695295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.695308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.695315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.695324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.695339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.705261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.705315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.705328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.705335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.705341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.705356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.715338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.715407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.715420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.715427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.715433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.715448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-12-09 17:39:02.725264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.725318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.725331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.725338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.725344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.725359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.735281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.735338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.735352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.735358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.735365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.735379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-12-09 17:39:02.745375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.586 [2024-12-09 17:39:02.745446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.586 [2024-12-09 17:39:02.745459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.586 [2024-12-09 17:39:02.745467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.586 [2024-12-09 17:39:02.745473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.586 [2024-12-09 17:39:02.745487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-12-09 17:39:02.755420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.587 [2024-12-09 17:39:02.755475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.587 [2024-12-09 17:39:02.755488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.587 [2024-12-09 17:39:02.755495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.587 [2024-12-09 17:39:02.755502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.587 [2024-12-09 17:39:02.755517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.843 [2024-12-09 17:39:02.765475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.843 [2024-12-09 17:39:02.765542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.843 [2024-12-09 17:39:02.765560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.843 [2024-12-09 17:39:02.765568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.843 [2024-12-09 17:39:02.765575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.843 [2024-12-09 17:39:02.765593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.843 qpair failed and we were unable to recover it. 00:28:33.843 [2024-12-09 17:39:02.775539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.843 [2024-12-09 17:39:02.775647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.843 [2024-12-09 17:39:02.775664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.843 [2024-12-09 17:39:02.775671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.843 [2024-12-09 17:39:02.775678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.843 [2024-12-09 17:39:02.775695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.843 qpair failed and we were unable to recover it. 
00:28:33.843 [2024-12-09 17:39:02.785499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.843 [2024-12-09 17:39:02.785581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.843 [2024-12-09 17:39:02.785598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.843 [2024-12-09 17:39:02.785605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-12-09 17:39:02.785611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.844 [2024-12-09 17:39:02.785627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-12-09 17:39:02.795516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-12-09 17:39:02.795572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-12-09 17:39:02.795586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-12-09 17:39:02.795593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-12-09 17:39:02.795600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.844 [2024-12-09 17:39:02.795615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-12-09 17:39:02.805568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-12-09 17:39:02.805666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-12-09 17:39:02.805680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-12-09 17:39:02.805687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-12-09 17:39:02.805694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:33.844 [2024-12-09 17:39:02.805709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 
00:28:33.844 [2024-12-09 17:39:02.815584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.844 [2024-12-09 17:39:02.815637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.844 [2024-12-09 17:39:02.815651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.844 [2024-12-09 17:39:02.815657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.844 [2024-12-09 17:39:02.815664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:33.844 [2024-12-09 17:39:02.815680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.844 qpair failed and we were unable to recover it.
[... the same six-record CONNECT failure sequence repeats roughly every 10 ms, 69 occurrences in all, from 17:39:02.815 through 17:39:03.497 (elapsed 00:28:33.844 -> 00:28:34.366); only the timestamps change. Last occurrence: ...]
00:28:34.366 [2024-12-09 17:39:03.497440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.366 [2024-12-09 17:39:03.497513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.366 [2024-12-09 17:39:03.497526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.366 [2024-12-09 17:39:03.497533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.366 [2024-12-09 17:39:03.497539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:34.366 [2024-12-09 17:39:03.497554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.366 qpair failed and we were unable to recover it.
00:28:34.366 [2024-12-09 17:39:03.507487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.366 [2024-12-09 17:39:03.507542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.366 [2024-12-09 17:39:03.507556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.366 [2024-12-09 17:39:03.507563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.366 [2024-12-09 17:39:03.507569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.366 [2024-12-09 17:39:03.507584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-12-09 17:39:03.517562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.366 [2024-12-09 17:39:03.517625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.366 [2024-12-09 17:39:03.517638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.366 [2024-12-09 17:39:03.517649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.366 [2024-12-09 17:39:03.517655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.366 [2024-12-09 17:39:03.517670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-12-09 17:39:03.527570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.366 [2024-12-09 17:39:03.527663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.366 [2024-12-09 17:39:03.527676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.366 [2024-12-09 17:39:03.527683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.366 [2024-12-09 17:39:03.527689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.366 [2024-12-09 17:39:03.527704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.366 qpair failed and we were unable to recover it. 
00:28:34.366 [2024-12-09 17:39:03.537616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.366 [2024-12-09 17:39:03.537679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.366 [2024-12-09 17:39:03.537698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.366 [2024-12-09 17:39:03.537707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.366 [2024-12-09 17:39:03.537714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.366 [2024-12-09 17:39:03.537731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.624 [2024-12-09 17:39:03.547646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.624 [2024-12-09 17:39:03.547704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.624 [2024-12-09 17:39:03.547722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.624 [2024-12-09 17:39:03.547730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.624 [2024-12-09 17:39:03.547737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.624 [2024-12-09 17:39:03.547754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.624 qpair failed and we were unable to recover it. 00:28:34.624 [2024-12-09 17:39:03.557692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.624 [2024-12-09 17:39:03.557778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.624 [2024-12-09 17:39:03.557792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.624 [2024-12-09 17:39:03.557799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.624 [2024-12-09 17:39:03.557805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.624 [2024-12-09 17:39:03.557823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.624 qpair failed and we were unable to recover it. 
00:28:34.624 [2024-12-09 17:39:03.567752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.624 [2024-12-09 17:39:03.567833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.624 [2024-12-09 17:39:03.567849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.624 [2024-12-09 17:39:03.567856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.624 [2024-12-09 17:39:03.567862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.624 [2024-12-09 17:39:03.567878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.624 qpair failed and we were unable to recover it. 00:28:34.624 [2024-12-09 17:39:03.577664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.577725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.577738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.577746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.577752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.577768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.587767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.587824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.587838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.587844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.587850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.587865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 
00:28:34.625 [2024-12-09 17:39:03.597719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.597818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.597831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.597838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.597844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.597860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.607889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.607947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.607961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.607968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.607974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.607990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.617822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.617879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.617893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.617900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.617906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.617921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 
00:28:34.625 [2024-12-09 17:39:03.627886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.627938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.627951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.627958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.627964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.627979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.637909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.637964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.637978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.637985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.637991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.638006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.647931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.648007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.648023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.648030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.648037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.648052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 
00:28:34.625 [2024-12-09 17:39:03.657919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.658017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.658031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.658038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.658044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.658059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.668035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.668090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.668104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.668111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.668117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.668132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.678023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.678079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.678092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.678100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.678106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.678122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 
00:28:34.625 [2024-12-09 17:39:03.687993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.688047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.688061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.688068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.688077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.688092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.698114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.698167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.698180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.698187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.698194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.698209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 00:28:34.625 [2024-12-09 17:39:03.708048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.708105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.708118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.625 [2024-12-09 17:39:03.708125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.625 [2024-12-09 17:39:03.708131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.625 [2024-12-09 17:39:03.708146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.625 qpair failed and we were unable to recover it. 
00:28:34.625 [2024-12-09 17:39:03.718068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.625 [2024-12-09 17:39:03.718120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.625 [2024-12-09 17:39:03.718134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.718140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.718146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.718161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 00:28:34.626 [2024-12-09 17:39:03.728123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.728213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.728230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.728238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.728244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.728259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 00:28:34.626 [2024-12-09 17:39:03.738215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.738273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.738286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.738293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.738299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.738314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 
00:28:34.626 [2024-12-09 17:39:03.748208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.748264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.748278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.748285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.748292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.748307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 00:28:34.626 [2024-12-09 17:39:03.758187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.758284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.758298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.758305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.758311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.758325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 00:28:34.626 [2024-12-09 17:39:03.768294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.768358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.768372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.768380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.768386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.768401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 
00:28:34.626 [2024-12-09 17:39:03.778274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.778369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.778385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.778392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.778398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.778413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 00:28:34.626 [2024-12-09 17:39:03.788380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.788460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.788473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.788480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.788487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.788503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 00:28:34.626 [2024-12-09 17:39:03.798406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.626 [2024-12-09 17:39:03.798469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.626 [2024-12-09 17:39:03.798493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.626 [2024-12-09 17:39:03.798501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.626 [2024-12-09 17:39:03.798508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.626 [2024-12-09 17:39:03.798524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.626 qpair failed and we were unable to recover it. 
00:28:34.885 [2024-12-09 17:39:03.808464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.808528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.808546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.808554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.808561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.808579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.818427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.818482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.818496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.818503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.818513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.818528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.828447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.828510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.828524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.828531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.828537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.828553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 
00:28:34.885 [2024-12-09 17:39:03.838471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.838526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.838540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.838547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.838553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.838569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.848504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.848560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.848574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.848581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.848587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.848602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.858520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.858572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.858586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.858592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.858599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.858615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 
00:28:34.885 [2024-12-09 17:39:03.868556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.868613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.868628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.868635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.868641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.868656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.878590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.878638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.878651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.878658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.878664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.878680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.888639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.888717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.888730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.888737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.888743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.888758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 
00:28:34.885 [2024-12-09 17:39:03.898652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.898718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.898733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.898740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.898746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.898761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.908665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.908717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.908735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.885 [2024-12-09 17:39:03.908742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.885 [2024-12-09 17:39:03.908748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.885 [2024-12-09 17:39:03.908764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.885 qpair failed and we were unable to recover it. 00:28:34.885 [2024-12-09 17:39:03.918632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.885 [2024-12-09 17:39:03.918687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.885 [2024-12-09 17:39:03.918701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.918708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.918714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.918730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 
00:28:34.886 [2024-12-09 17:39:03.928733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.928831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.928846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.928853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.928860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.928875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 00:28:34.886 [2024-12-09 17:39:03.938755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.938811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.938823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.938830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.938837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.938852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 00:28:34.886 [2024-12-09 17:39:03.948787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.948842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.948856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.948867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.948873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.948888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 
00:28:34.886 [2024-12-09 17:39:03.958815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.958907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.958920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.958927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.958933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.958949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 00:28:34.886 [2024-12-09 17:39:03.968847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.968926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.968942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.968950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.968956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.968971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 00:28:34.886 [2024-12-09 17:39:03.978801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.978854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.978867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.978874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.978880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.978895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 
00:28:34.886 [2024-12-09 17:39:03.988912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.988965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.988979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.988986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.988992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.989010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 00:28:34.886 [2024-12-09 17:39:03.998926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:03.998983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:03.998996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:03.999003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:03.999009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:03.999024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 00:28:34.886 [2024-12-09 17:39:04.008976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.886 [2024-12-09 17:39:04.009031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.886 [2024-12-09 17:39:04.009045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.886 [2024-12-09 17:39:04.009051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.886 [2024-12-09 17:39:04.009058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:34.886 [2024-12-09 17:39:04.009073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.886 qpair failed and we were unable to recover it. 
00:28:34.886 [2024-12-09 17:39:04.018992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.886 [2024-12-09 17:39:04.019049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.886 [2024-12-09 17:39:04.019062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.886 [2024-12-09 17:39:04.019069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.886 [2024-12-09 17:39:04.019075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:34.886 [2024-12-09 17:39:04.019090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.886 qpair failed and we were unable to recover it.
00:28:34.886 [2024-12-09 17:39:04.029064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.886 [2024-12-09 17:39:04.029151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.886 [2024-12-09 17:39:04.029164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.886 [2024-12-09 17:39:04.029171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.886 [2024-12-09 17:39:04.029177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:34.886 [2024-12-09 17:39:04.029192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.886 qpair failed and we were unable to recover it.
00:28:34.886 [2024-12-09 17:39:04.039039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.886 [2024-12-09 17:39:04.039096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.886 [2024-12-09 17:39:04.039109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.886 [2024-12-09 17:39:04.039116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.886 [2024-12-09 17:39:04.039123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:34.886 [2024-12-09 17:39:04.039137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.886 qpair failed and we were unable to recover it.
00:28:34.886 [2024-12-09 17:39:04.049082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.886 [2024-12-09 17:39:04.049141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.886 [2024-12-09 17:39:04.049154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.886 [2024-12-09 17:39:04.049162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.886 [2024-12-09 17:39:04.049168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:34.887 [2024-12-09 17:39:04.049183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.887 qpair failed and we were unable to recover it.
00:28:34.887 [2024-12-09 17:39:04.059115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.887 [2024-12-09 17:39:04.059239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.887 [2024-12-09 17:39:04.059261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.887 [2024-12-09 17:39:04.059270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.887 [2024-12-09 17:39:04.059277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:34.887 [2024-12-09 17:39:04.059296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.887 qpair failed and we were unable to recover it.
00:28:35.145 [2024-12-09 17:39:04.069147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.145 [2024-12-09 17:39:04.069207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.145 [2024-12-09 17:39:04.069227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.145 [2024-12-09 17:39:04.069235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.145 [2024-12-09 17:39:04.069242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.145 [2024-12-09 17:39:04.069259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.145 qpair failed and we were unable to recover it.
00:28:35.145 [2024-12-09 17:39:04.079181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.145 [2024-12-09 17:39:04.079262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.145 [2024-12-09 17:39:04.079276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.079286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.079292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.079308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.089118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.089175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.089188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.089195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.089201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.089238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.099259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.099318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.099332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.099339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.099345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.099360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.109248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.109300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.109314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.109321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.109328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.109343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.119291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.119340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.119353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.119360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.119367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.119385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.129310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.129365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.129378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.129385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.129390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.129406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.139331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.139394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.139407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.139414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.139421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.139435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.149348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.149415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.149428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.149435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.149441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.149456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.159361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.159418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.159432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.159440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.159446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.159462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.169407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.169505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.169520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.169528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.169534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.169549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.179446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.179523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.179537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.179545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.179551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.179566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.189559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.189616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.189629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.189637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.189643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.189658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.199508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.199591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.199604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.146 [2024-12-09 17:39:04.199611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.146 [2024-12-09 17:39:04.199617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.146 [2024-12-09 17:39:04.199632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.146 qpair failed and we were unable to recover it.
00:28:35.146 [2024-12-09 17:39:04.209536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.146 [2024-12-09 17:39:04.209598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.146 [2024-12-09 17:39:04.209614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.209621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.209627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.209642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.219536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.219595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.219610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.219618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.219624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.219638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.229597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.229666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.229679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.229686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.229692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.229706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.239548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.239613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.239628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.239635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.239641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.239655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.249660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.249717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.249729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.249736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.249746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.249761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.259689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.259771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.259784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.259791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.259798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.259811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.269723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.269805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.269819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.269826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.269833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.269848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.279768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.279869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.279882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.279889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.279895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.279911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.289788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.289842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.289855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.289862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.289870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.289885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.299854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.299956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.299970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.299977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.299983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.299997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.309867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.309914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.309928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.309935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.309941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.309955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.147 [2024-12-09 17:39:04.319875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.147 [2024-12-09 17:39:04.319938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.147 [2024-12-09 17:39:04.319956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.147 [2024-12-09 17:39:04.319964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.147 [2024-12-09 17:39:04.319970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.147 [2024-12-09 17:39:04.319988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.147 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.329990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.330070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.330088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.330096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.330102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.330119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.339867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.339949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.339966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.339973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.339979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.339994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.349876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.349958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.349972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.349979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.349985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.350001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.359999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.360098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.360112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.360119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.360125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.360140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.370001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.370059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.370073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.370080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.370086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.370101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.379948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.380002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.380015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.380022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.380032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.380047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.389970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.390061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.390075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.390082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.390088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.390103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.400076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.400126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.400139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.406 [2024-12-09 17:39:04.400146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.406 [2024-12-09 17:39:04.400152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.406 [2024-12-09 17:39:04.400167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.406 qpair failed and we were unable to recover it.
00:28:35.406 [2024-12-09 17:39:04.410033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.406 [2024-12-09 17:39:04.410089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.406 [2024-12-09 17:39:04.410104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.410112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.410119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.410134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.420134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.420193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.420207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.420215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.420226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.420241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.430160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.430215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.430233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.430239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.430245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.430260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.440181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.440238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.440251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.440259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.440266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.440281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.450284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.450366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.450380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.450387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.450393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.450408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.460256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.460310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.460323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.460330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.460337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.460352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.470285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.470338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.470355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.470363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.470369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.470384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.480246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.480301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.480315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.480322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.480328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.480343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.490275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.490332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.490346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.490353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.490359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.490374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.500403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.500457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.500469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.500476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.500483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.500499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.510385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.510458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.510472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.510483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.510489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.510504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.520447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.520504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.520517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.520523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.520529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.520543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.530479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.530535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.530548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.530555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.407 [2024-12-09 17:39:04.530561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.407 [2024-12-09 17:39:04.530575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.407 qpair failed and we were unable to recover it.
00:28:35.407 [2024-12-09 17:39:04.540509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.407 [2024-12-09 17:39:04.540573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.407 [2024-12-09 17:39:04.540587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.407 [2024-12-09 17:39:04.540594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.408 [2024-12-09 17:39:04.540600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.408 [2024-12-09 17:39:04.540615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.408 qpair failed and we were unable to recover it.
00:28:35.408 [2024-12-09 17:39:04.550544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.408 [2024-12-09 17:39:04.550601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.408 [2024-12-09 17:39:04.550614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.408 [2024-12-09 17:39:04.550622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.408 [2024-12-09 17:39:04.550628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.408 [2024-12-09 17:39:04.550645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.408 qpair failed and we were unable to recover it.
00:28:35.408 [2024-12-09 17:39:04.560568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.408 [2024-12-09 17:39:04.560634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.408 [2024-12-09 17:39:04.560647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.408 [2024-12-09 17:39:04.560654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.408 [2024-12-09 17:39:04.560661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.408 [2024-12-09 17:39:04.560675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.408 qpair failed and we were unable to recover it.
00:28:35.408 [2024-12-09 17:39:04.570608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.408 [2024-12-09 17:39:04.570711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.408 [2024-12-09 17:39:04.570725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.408 [2024-12-09 17:39:04.570732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.408 [2024-12-09 17:39:04.570738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.408 [2024-12-09 17:39:04.570753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.408 qpair failed and we were unable to recover it.
00:28:35.408 [2024-12-09 17:39:04.580612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.408 [2024-12-09 17:39:04.580677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.408 [2024-12-09 17:39:04.580695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.408 [2024-12-09 17:39:04.580703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.408 [2024-12-09 17:39:04.580714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:35.408 [2024-12-09 17:39:04.580732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.408 qpair failed and we were unable to recover it.
00:28:35.666 [2024-12-09 17:39:04.590648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.666 [2024-12-09 17:39:04.590721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.666 [2024-12-09 17:39:04.590739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.590747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.590754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.590771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.600600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.600661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.600675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.600682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.600689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.600704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.610680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.610742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.610755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.610763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.610769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.610784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 
00:28:35.667 [2024-12-09 17:39:04.620726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.620784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.620798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.620805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.620812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.620827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.630668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.630769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.630782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.630789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.630795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.630810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.640799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.640853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.640867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.640877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.640883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.640899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 
00:28:35.667 [2024-12-09 17:39:04.650784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.650865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.650878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.650886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.650892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.650906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.660836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.660895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.660909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.660916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.660922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.660938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.670820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.670873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.670888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.670895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.670902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.670917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 
00:28:35.667 [2024-12-09 17:39:04.680855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.680909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.680922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.680929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.680936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.680954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.690925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.690982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.690995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.691002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.691009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.691024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.700933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.701020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.701033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.701040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.701046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.701060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 
00:28:35.667 [2024-12-09 17:39:04.710985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.711038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.711051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.711058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.711064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.711080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.720898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.720969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.720982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.720989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.720995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.721010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.731030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.731208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.731234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.731241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.731247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.731263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 
00:28:35.667 [2024-12-09 17:39:04.740997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.667 [2024-12-09 17:39:04.741095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.667 [2024-12-09 17:39:04.741109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.667 [2024-12-09 17:39:04.741116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.667 [2024-12-09 17:39:04.741123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.667 [2024-12-09 17:39:04.741137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.667 qpair failed and we were unable to recover it. 00:28:35.667 [2024-12-09 17:39:04.751053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.751108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.751121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.751128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.751134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.751149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.668 [2024-12-09 17:39:04.761084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.761144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.761157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.761164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.761170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.761185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 
00:28:35.668 [2024-12-09 17:39:04.771124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.771182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.771199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.771207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.771213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.771233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.668 [2024-12-09 17:39:04.781142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.781196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.781210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.781220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.781226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.781242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.668 [2024-12-09 17:39:04.791164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.791215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.791234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.791241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.791247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.791263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 
00:28:35.668 [2024-12-09 17:39:04.801190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.801248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.801262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.801269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.801275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.801291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.668 [2024-12-09 17:39:04.811236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.811293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.811306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.811313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.811322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.811337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.668 [2024-12-09 17:39:04.821287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.821341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.821354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.821361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.821368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.821383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 
00:28:35.668 [2024-12-09 17:39:04.831283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.831336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.831350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.831356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.831363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.831378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.668 [2024-12-09 17:39:04.841341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.668 [2024-12-09 17:39:04.841408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.668 [2024-12-09 17:39:04.841426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.668 [2024-12-09 17:39:04.841433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.668 [2024-12-09 17:39:04.841440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.668 [2024-12-09 17:39:04.841457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.668 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.851321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.851384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.851402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.851409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.851416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.851433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 
00:28:35.927 [2024-12-09 17:39:04.861382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.861441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.861454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.861461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.861468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.861483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.871416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.871476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.871491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.871498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.871504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.871520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.881395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.881476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.881490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.881498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.881506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.881521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 
00:28:35.927 [2024-12-09 17:39:04.891411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.891466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.891480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.891487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.891493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.891509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.901452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.901543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.901559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.901567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.901573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.901588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.911522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.911578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.911593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.911600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.911607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.911622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 
00:28:35.927 [2024-12-09 17:39:04.921544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.921598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.921611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.921618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.921625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.921640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.931519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.931579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.931593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.931600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.931606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.931622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.941610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.941667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.941680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.941687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.941697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.941711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 
00:28:35.927 [2024-12-09 17:39:04.951633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.951686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.951700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.927 [2024-12-09 17:39:04.951707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.927 [2024-12-09 17:39:04.951713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.927 [2024-12-09 17:39:04.951728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.927 qpair failed and we were unable to recover it. 00:28:35.927 [2024-12-09 17:39:04.961604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.927 [2024-12-09 17:39:04.961665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.927 [2024-12-09 17:39:04.961678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:04.961685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:04.961691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:04.961706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:04.971696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:04.971753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:04.971767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:04.971774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:04.971780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:04.971796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 
00:28:35.928 [2024-12-09 17:39:04.981748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:04.981802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:04.981815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:04.981822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:04.981829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:04.981844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:04.991762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:04.991812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:04.991826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:04.991833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:04.991839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:04.991855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.001823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.001876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.001889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.001896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.001903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.001917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 
00:28:35.928 [2024-12-09 17:39:05.011881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.011941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.011955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.011962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.011968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.011982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.021850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.021905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.021921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.021928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.021934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.021948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.031854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.031941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.031954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.031961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.031967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.031982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 
00:28:35.928 [2024-12-09 17:39:05.041829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.041884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.041899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.041906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.041912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.041927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.051951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.052007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.052021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.052027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.052034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.052048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.061981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.062058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.062071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.062078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.062084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.062099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 
00:28:35.928 [2024-12-09 17:39:05.071979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.072038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.072052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.072062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.072069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.072084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.082084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.082138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.082152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.082159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.082165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.928 [2024-12-09 17:39:05.082180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.928 qpair failed and we were unable to recover it. 00:28:35.928 [2024-12-09 17:39:05.091999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.928 [2024-12-09 17:39:05.092054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.928 [2024-12-09 17:39:05.092067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.928 [2024-12-09 17:39:05.092074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.928 [2024-12-09 17:39:05.092080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.929 [2024-12-09 17:39:05.092095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.929 qpair failed and we were unable to recover it. 
00:28:35.929 [2024-12-09 17:39:05.102031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.929 [2024-12-09 17:39:05.102088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.929 [2024-12-09 17:39:05.102106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.929 [2024-12-09 17:39:05.102114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.929 [2024-12-09 17:39:05.102121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:35.929 [2024-12-09 17:39:05.102138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.929 qpair failed and we were unable to recover it. 00:28:36.192 [2024-12-09 17:39:05.112131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.192 [2024-12-09 17:39:05.112263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.192 [2024-12-09 17:39:05.112282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.192 [2024-12-09 17:39:05.112290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.192 [2024-12-09 17:39:05.112296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.192 [2024-12-09 17:39:05.112318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.192 qpair failed and we were unable to recover it. 00:28:36.192 [2024-12-09 17:39:05.122140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.192 [2024-12-09 17:39:05.122194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.192 [2024-12-09 17:39:05.122208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.192 [2024-12-09 17:39:05.122215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.192 [2024-12-09 17:39:05.122227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.192 [2024-12-09 17:39:05.122244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.192 qpair failed and we were unable to recover it. 
00:28:36.192 [2024-12-09 17:39:05.132250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.192 [2024-12-09 17:39:05.132318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.192 [2024-12-09 17:39:05.132332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.192 [2024-12-09 17:39:05.132339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.192 [2024-12-09 17:39:05.132346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.192 [2024-12-09 17:39:05.132362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.192 qpair failed and we were unable to recover it. 00:28:36.192 [2024-12-09 17:39:05.142186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.192 [2024-12-09 17:39:05.142246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.192 [2024-12-09 17:39:05.142260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.192 [2024-12-09 17:39:05.142267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.192 [2024-12-09 17:39:05.142274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.192 [2024-12-09 17:39:05.142289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.192 qpair failed and we were unable to recover it. 00:28:36.192 [2024-12-09 17:39:05.152263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.192 [2024-12-09 17:39:05.152319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.192 [2024-12-09 17:39:05.152332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.192 [2024-12-09 17:39:05.152339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.192 [2024-12-09 17:39:05.152345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.192 [2024-12-09 17:39:05.152360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.192 qpair failed and we were unable to recover it. 
00:28:36.192 [2024-12-09 17:39:05.162285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.162363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.162378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.162385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.162391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.162408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.172292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.172353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.172367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.172375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.172381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.172397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.182326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.182378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.182391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.182398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.182404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.182420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.192340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.192402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.192416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.192423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.192429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.192444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.202368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.202431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.202445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.202455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.202462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.202476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.212423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.212480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.212493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.212499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.212506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.212520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.222462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.222519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.222531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.222538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.222545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.222559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.232449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.232533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.232546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.232553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.232559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.232574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.242480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.242534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.242548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.242555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.242561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.242579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.252445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.252550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.252563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.252570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.252576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.252590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.262543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.262640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.262653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.262660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.262666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.262681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.272561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.272617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.272631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.272638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.272645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.272660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.282594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.282684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.282698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.282705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.282711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.282726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.292632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.292687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.292700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.292707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.292713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.292728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.192 qpair failed and we were unable to recover it.
00:28:36.192 [2024-12-09 17:39:05.302661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.192 [2024-12-09 17:39:05.302714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.192 [2024-12-09 17:39:05.302727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.192 [2024-12-09 17:39:05.302734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.192 [2024-12-09 17:39:05.302740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.192 [2024-12-09 17:39:05.302755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.193 [2024-12-09 17:39:05.312679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.193 [2024-12-09 17:39:05.312764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.193 [2024-12-09 17:39:05.312777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.193 [2024-12-09 17:39:05.312784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.193 [2024-12-09 17:39:05.312790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.193 [2024-12-09 17:39:05.312804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.193 [2024-12-09 17:39:05.322696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.193 [2024-12-09 17:39:05.322795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.193 [2024-12-09 17:39:05.322809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.193 [2024-12-09 17:39:05.322816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.193 [2024-12-09 17:39:05.322822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.193 [2024-12-09 17:39:05.322837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.193 [2024-12-09 17:39:05.332727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.193 [2024-12-09 17:39:05.332812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.193 [2024-12-09 17:39:05.332829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.193 [2024-12-09 17:39:05.332836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.193 [2024-12-09 17:39:05.332842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.193 [2024-12-09 17:39:05.332857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.193 [2024-12-09 17:39:05.342778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.193 [2024-12-09 17:39:05.342835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.193 [2024-12-09 17:39:05.342848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.193 [2024-12-09 17:39:05.342854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.193 [2024-12-09 17:39:05.342861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.193 [2024-12-09 17:39:05.342876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.193 [2024-12-09 17:39:05.352767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.193 [2024-12-09 17:39:05.352822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.193 [2024-12-09 17:39:05.352836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.193 [2024-12-09 17:39:05.352843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.193 [2024-12-09 17:39:05.352849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.193 [2024-12-09 17:39:05.352864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.193 [2024-12-09 17:39:05.362882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.193 [2024-12-09 17:39:05.362968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.193 [2024-12-09 17:39:05.362985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.193 [2024-12-09 17:39:05.362992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.193 [2024-12-09 17:39:05.362999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.193 [2024-12-09 17:39:05.363015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.193 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.372889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.372954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.372971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.372980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.372989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.373007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.382878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.382934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.382948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.382955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.382961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.382976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.392901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.392957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.392971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.392979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.392986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.393001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.402931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.402984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.402997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.403004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.403011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.403026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.412973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.413030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.413043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.413051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.413057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.413073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.423019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.423074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.423088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.423095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.423101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.423116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.433060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.433146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.433159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.433167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.433173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.433188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.443039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.443096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.443109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.443116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.443123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.443138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.453099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.453156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.453169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.453176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.453183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.453198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.463110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.463168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.463185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.463192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.463198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.463213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.473061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.450 [2024-12-09 17:39:05.473123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.450 [2024-12-09 17:39:05.473138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.450 [2024-12-09 17:39:05.473146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.450 [2024-12-09 17:39:05.473152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.450 [2024-12-09 17:39:05.473167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.450 qpair failed and we were unable to recover it.
00:28:36.450 [2024-12-09 17:39:05.483159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.483211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.483229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.483236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.483242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.483257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.493201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.493264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.493277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.493285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.493291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.493306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.503237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.503295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.503308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.503315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.503324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.503339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.513187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.513243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.513256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.513264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.513270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.513285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.523272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.523376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.523390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.523397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.523402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.523417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.533271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.533357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.533370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.533377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.533384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.533399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.543341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.543401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.543414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.543422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.543429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.543443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.553396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.553454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.553467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.553474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.553480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.553495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.563403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.563457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.563470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.563477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.563483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.563498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.573441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.573499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.573513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.573521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.573528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.573543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.583400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.583457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.583470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.583477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.583483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.583498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.593486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.593545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.593559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.593566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.593572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.593588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.603456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.603512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.603525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.603532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.603539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.603553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.613596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.613691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.613704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.613711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.613718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.613733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.451 [2024-12-09 17:39:05.623573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.451 [2024-12-09 17:39:05.623658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.451 [2024-12-09 17:39:05.623681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.451 [2024-12-09 17:39:05.623689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.451 [2024-12-09 17:39:05.623696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.451 [2024-12-09 17:39:05.623713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.451 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.633561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.633668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.633686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.633699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.633706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.633723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.643643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.643718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.643732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.643739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.643745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.643760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.653683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.653758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.653772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.653779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.653785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.653801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.663726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.663785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.663799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.663805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.663812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.663827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.673697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.673755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.673770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.673779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.673786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.673805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.683771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.683834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.683848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.683855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.683861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.683876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.693804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.693868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.693882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.693889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.693895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.693910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.703811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.703866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.703879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.703886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.703892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.703907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.713837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.713890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.713903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.713910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.713917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.710 [2024-12-09 17:39:05.713932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.710 qpair failed and we were unable to recover it.
00:28:36.710 [2024-12-09 17:39:05.723812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.710 [2024-12-09 17:39:05.723908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.710 [2024-12-09 17:39:05.723922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.710 [2024-12-09 17:39:05.723929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.710 [2024-12-09 17:39:05.723935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90
00:28:36.711 [2024-12-09 17:39:05.723949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.711 qpair failed and we were unable to recover it.
00:28:36.711 [2024-12-09 17:39:05.733919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.733975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.733989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.733996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.734002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.734019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.743923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.743976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.743991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.743998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.744004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.744019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.753950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.754007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.754020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.754028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.754035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.754049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 
00:28:36.711 [2024-12-09 17:39:05.763998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.764055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.764071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.764078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.764084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.764100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.774022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.774077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.774091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.774098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.774104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.774120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.784048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.784105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.784119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.784126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.784133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.784149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 
00:28:36.711 [2024-12-09 17:39:05.794008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.794071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.794085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.794092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.794098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.794113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.804114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.804174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.804188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.804195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.804201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.804223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.814171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.814232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.814245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.814252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.814258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.814274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 
00:28:36.711 [2024-12-09 17:39:05.824167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.824224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.824238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.824244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.824251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.824265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.834141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.834194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.834207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.834215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.834224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.834241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 00:28:36.711 [2024-12-09 17:39:05.844206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.844264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.844277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.844285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.844291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.711 [2024-12-09 17:39:05.844306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.711 qpair failed and we were unable to recover it. 
00:28:36.711 [2024-12-09 17:39:05.854293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.711 [2024-12-09 17:39:05.854351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.711 [2024-12-09 17:39:05.854364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.711 [2024-12-09 17:39:05.854372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.711 [2024-12-09 17:39:05.854378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.712 [2024-12-09 17:39:05.854393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.712 qpair failed and we were unable to recover it. 00:28:36.712 [2024-12-09 17:39:05.864320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.712 [2024-12-09 17:39:05.864374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.712 [2024-12-09 17:39:05.864388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.712 [2024-12-09 17:39:05.864394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.712 [2024-12-09 17:39:05.864401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.712 [2024-12-09 17:39:05.864417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.712 qpair failed and we were unable to recover it. 00:28:36.712 [2024-12-09 17:39:05.874311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.712 [2024-12-09 17:39:05.874364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.712 [2024-12-09 17:39:05.874378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.712 [2024-12-09 17:39:05.874385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.712 [2024-12-09 17:39:05.874392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.712 [2024-12-09 17:39:05.874407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.712 qpair failed and we were unable to recover it. 
00:28:36.712 [2024-12-09 17:39:05.884377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.712 [2024-12-09 17:39:05.884437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.712 [2024-12-09 17:39:05.884454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.712 [2024-12-09 17:39:05.884462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.712 [2024-12-09 17:39:05.884469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.712 [2024-12-09 17:39:05.884486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.712 qpair failed and we were unable to recover it. 00:28:36.970 [2024-12-09 17:39:05.894400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.894461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.894482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.894491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.894497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.894514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 00:28:36.970 [2024-12-09 17:39:05.904330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.904387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.904401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.904409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.904416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.904431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 
00:28:36.970 [2024-12-09 17:39:05.914419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.914474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.914488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.914495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.914501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.914516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 00:28:36.970 [2024-12-09 17:39:05.924377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.924433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.924448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.924455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.924461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.924478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 00:28:36.970 [2024-12-09 17:39:05.934487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.934550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.934564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.934572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.934581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.934596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 
00:28:36.970 [2024-12-09 17:39:05.944512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.944567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.944581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.944588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.944595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.944610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 00:28:36.970 [2024-12-09 17:39:05.954461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.954512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.954526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.970 [2024-12-09 17:39:05.954533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.970 [2024-12-09 17:39:05.954539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.970 [2024-12-09 17:39:05.954554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.970 qpair failed and we were unable to recover it. 00:28:36.970 [2024-12-09 17:39:05.964553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.970 [2024-12-09 17:39:05.964647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.970 [2024-12-09 17:39:05.964661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:05.964668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:05.964674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:05.964688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 
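For context, the endpoint these attempts keep targeting (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) is an SPDK nvmf target that the harness configures before deliberately breaking the controller. A hedged sketch of how such a listener is typically stood up with SPDK's rpc.py; this is not taken from the present run, and exact flags vary by SPDK version:

  # create a TCP transport and a malloc-backed subsystem, then listen on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The repeated rejections are expected in this test: the controller is torn down on purpose, so CONNECT for the old controller ID keeps failing until the host gives up and resets.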
00:28:36.971 [2024-12-09 17:39:05.974527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:05.974585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:05.974600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:05.974607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:05.974613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:05.974628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:05.984650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:05.984718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:05.984731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:05.984738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:05.984745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:05.984759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:05.994656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:05.994713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:05.994727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:05.994734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:05.994741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:05.994756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 
00:28:36.971 [2024-12-09 17:39:06.004663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.004716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.004729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.004736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.004742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.004757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:06.014711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.014768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.014781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.014788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.014795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.014809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:06.024727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.024786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.024802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.024809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.024815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.024830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 
00:28:36.971 [2024-12-09 17:39:06.034702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.034764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.034777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.034784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.034790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.034805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:06.044695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.044747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.044760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.044767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.044774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.044789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:06.054803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.054856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.054870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.054877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.054884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.054900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 
00:28:36.971 [2024-12-09 17:39:06.064843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.064901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.064914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.064921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.064930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.064946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:06.074819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.074913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.074928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.074935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.074941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.074956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 00:28:36.971 [2024-12-09 17:39:06.084922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.084979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.084992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.084999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.085006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.971 [2024-12-09 17:39:06.085020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.971 qpair failed and we were unable to recover it. 
00:28:36.971 [2024-12-09 17:39:06.094983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.971 [2024-12-09 17:39:06.095041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.971 [2024-12-09 17:39:06.095053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.971 [2024-12-09 17:39:06.095060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.971 [2024-12-09 17:39:06.095067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.972 [2024-12-09 17:39:06.095082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.972 qpair failed and we were unable to recover it. 00:28:36.972 [2024-12-09 17:39:06.104950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.972 [2024-12-09 17:39:06.105000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.972 [2024-12-09 17:39:06.105013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.972 [2024-12-09 17:39:06.105020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.972 [2024-12-09 17:39:06.105027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.972 [2024-12-09 17:39:06.105042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.972 qpair failed and we were unable to recover it. 00:28:36.972 [2024-12-09 17:39:06.114973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.972 [2024-12-09 17:39:06.115040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.972 [2024-12-09 17:39:06.115054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.972 [2024-12-09 17:39:06.115062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.972 [2024-12-09 17:39:06.115068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.972 [2024-12-09 17:39:06.115083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.972 qpair failed and we were unable to recover it. 
00:28:36.972 [2024-12-09 17:39:06.125031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.972 [2024-12-09 17:39:06.125088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.972 [2024-12-09 17:39:06.125102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.972 [2024-12-09 17:39:06.125109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.972 [2024-12-09 17:39:06.125115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.972 [2024-12-09 17:39:06.125129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.972 qpair failed and we were unable to recover it. 00:28:36.972 [2024-12-09 17:39:06.134972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.972 [2024-12-09 17:39:06.135028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.972 [2024-12-09 17:39:06.135041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.972 [2024-12-09 17:39:06.135048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.972 [2024-12-09 17:39:06.135055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.972 [2024-12-09 17:39:06.135069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.972 qpair failed and we were unable to recover it. 00:28:36.972 [2024-12-09 17:39:06.145085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.972 [2024-12-09 17:39:06.145148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.972 [2024-12-09 17:39:06.145166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.972 [2024-12-09 17:39:06.145175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.972 [2024-12-09 17:39:06.145181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:36.972 [2024-12-09 17:39:06.145199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.972 qpair failed and we were unable to recover it. 
00:28:37.230 [2024-12-09 17:39:06.155056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.230 [2024-12-09 17:39:06.155131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.230 [2024-12-09 17:39:06.155149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.230 [2024-12-09 17:39:06.155156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.230 [2024-12-09 17:39:06.155163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:37.230 [2024-12-09 17:39:06.155180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.230 qpair failed and we were unable to recover it. 00:28:37.230 [2024-12-09 17:39:06.165108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.230 [2024-12-09 17:39:06.165175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.230 [2024-12-09 17:39:06.165191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.230 [2024-12-09 17:39:06.165200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.230 [2024-12-09 17:39:06.165208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:37.230 [2024-12-09 17:39:06.165231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.230 qpair failed and we were unable to recover it. 00:28:37.230 [2024-12-09 17:39:06.175203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.230 [2024-12-09 17:39:06.175305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.230 [2024-12-09 17:39:06.175321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.230 [2024-12-09 17:39:06.175329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.230 [2024-12-09 17:39:06.175335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:37.230 [2024-12-09 17:39:06.175350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.230 qpair failed and we were unable to recover it. 
00:28:37.230 [2024-12-09 17:39:06.185187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.230 [2024-12-09 17:39:06.185262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.230 [2024-12-09 17:39:06.185276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.230 [2024-12-09 17:39:06.185283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.230 [2024-12-09 17:39:06.185289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:37.230 [2024-12-09 17:39:06.185306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.230 qpair failed and we were unable to recover it. 00:28:37.230 [2024-12-09 17:39:06.195224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.230 [2024-12-09 17:39:06.195292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.230 [2024-12-09 17:39:06.195307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.230 [2024-12-09 17:39:06.195317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.230 [2024-12-09 17:39:06.195323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8054000b90 00:28:37.230 [2024-12-09 17:39:06.195339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.230 qpair failed and we were unable to recover it. 00:28:37.230 [2024-12-09 17:39:06.195447] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:37.230 A controller has encountered a failure and is being reset. 00:28:37.230 Controller properly reset. 00:28:37.230 Initializing NVMe Controllers 00:28:37.230 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:37.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:37.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:37.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:37.230 Initialization complete. Launching workers. 
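The recovery path is visible in the last few entries above: once enough qpair attempts fail, the host's Keep Alive submission fails, the controller is flagged for reset ("A controller has encountered a failure and is being reset."), and a fresh attach to 10.0.0.2:4420 succeeds with qpairs associated across lcores 0-3. Reproducing the host side of such a connect by hand does not require SPDK at all; a minimal sketch with the kernel initiator and nvme-cli (assuming nvme-cli is installed and a target like the one above is listening; these commands are illustrative and not part of this run):

  # connect, confirm the path came up, then detach cleanly
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Killing the target while that path is live approximates what this test exercises, although the kernel's reconnect behavior differs from the SPDK host-side poller whose errors appear in this trace.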
00:28:37.230 Starting thread on core 1 00:28:37.230 Starting thread on core 2 00:28:37.230 Starting thread on core 3 00:28:37.230 Starting thread on core 0 00:28:37.230 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:37.230 00:28:37.230 real 0m10.769s 00:28:37.230 user 0m19.162s 00:28:37.230 sys 0m4.634s 00:28:37.230 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.230 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.230 ************************************ 00:28:37.230 END TEST nvmf_target_disconnect_tc2 00:28:37.230 ************************************ 00:28:37.230 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:37.230 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.231 rmmod nvme_tcp 00:28:37.231 rmmod nvme_fabrics 00:28:37.231 rmmod nvme_keyring 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2739190 ']' 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2739190 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2739190 ']' 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2739190 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.231 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2739190 00:28:37.489 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:37.489 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:37.489 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2739190' 00:28:37.489 killing process with pid 2739190 00:28:37.489 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2739190 00:28:37.489 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2739190 00:28:37.489 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.490 17:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.024 17:39:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:40.024 00:28:40.024 real 0m19.573s 00:28:40.024 user 0m46.773s 00:28:40.024 sys 0m9.513s 00:28:40.024 17:39:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.024 17:39:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:40.024 ************************************ 00:28:40.024 END TEST nvmf_target_disconnect 00:28:40.024 ************************************ 00:28:40.024 17:39:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:40.024 00:28:40.024 real 5m51.724s 00:28:40.024 user 10m30.803s 00:28:40.024 sys 1m58.400s 00:28:40.024 17:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.024 17:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.024 ************************************ 00:28:40.024 END TEST nvmf_host 00:28:40.024 ************************************ 00:28:40.024 17:39:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:40.024 17:39:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:40.024 17:39:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:40.024 17:39:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:40.024 17:39:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.024 17:39:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:40.024 ************************************ 00:28:40.024 START TEST nvmf_target_core_interrupt_mode 00:28:40.024 ************************************ 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:40.024 * Looking for test storage... 00:28:40.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:40.024 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.025 --rc genhtml_branch_coverage=1 00:28:40.025 --rc genhtml_function_coverage=1 00:28:40.025 --rc genhtml_legend=1 00:28:40.025 --rc geninfo_all_blocks=1 00:28:40.025 --rc geninfo_unexecuted_blocks=1 00:28:40.025 00:28:40.025 ' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.025 --rc genhtml_branch_coverage=1 00:28:40.025 --rc genhtml_function_coverage=1 00:28:40.025 --rc genhtml_legend=1 00:28:40.025 --rc geninfo_all_blocks=1 00:28:40.025 --rc geninfo_unexecuted_blocks=1 00:28:40.025 00:28:40.025 ' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.025 --rc genhtml_branch_coverage=1 00:28:40.025 --rc genhtml_function_coverage=1 00:28:40.025 --rc genhtml_legend=1 00:28:40.025 --rc geninfo_all_blocks=1 00:28:40.025 --rc geninfo_unexecuted_blocks=1 00:28:40.025 00:28:40.025 ' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.025 --rc genhtml_branch_coverage=1 00:28:40.025 --rc genhtml_function_coverage=1 00:28:40.025 --rc genhtml_legend=1 00:28:40.025 --rc geninfo_all_blocks=1 00:28:40.025 --rc geninfo_unexecuted_blocks=1 00:28:40.025 00:28:40.025 ' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.025 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:40.025 ************************************ 00:28:40.025 START TEST nvmf_abort 00:28:40.025 ************************************ 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:40.025 * Looking for test storage... 00:28:40.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:40.025 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:40.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.026 --rc genhtml_branch_coverage=1 00:28:40.026 --rc genhtml_function_coverage=1 00:28:40.026 --rc genhtml_legend=1 00:28:40.026 --rc geninfo_all_blocks=1 00:28:40.026 --rc geninfo_unexecuted_blocks=1 00:28:40.026 00:28:40.026 ' 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:40.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.026 --rc genhtml_branch_coverage=1 00:28:40.026 --rc genhtml_function_coverage=1 00:28:40.026 --rc genhtml_legend=1 00:28:40.026 --rc geninfo_all_blocks=1 00:28:40.026 --rc geninfo_unexecuted_blocks=1 00:28:40.026 00:28:40.026 ' 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:40.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.026 --rc genhtml_branch_coverage=1 00:28:40.026 --rc genhtml_function_coverage=1 00:28:40.026 --rc genhtml_legend=1 00:28:40.026 --rc geninfo_all_blocks=1 00:28:40.026 --rc geninfo_unexecuted_blocks=1 00:28:40.026 00:28:40.026 ' 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:40.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.026 --rc genhtml_branch_coverage=1 00:28:40.026 --rc genhtml_function_coverage=1 00:28:40.026 --rc genhtml_legend=1 00:28:40.026 --rc geninfo_all_blocks=1 00:28:40.026 --rc geninfo_unexecuted_blocks=1 00:28:40.026 00:28:40.026 ' 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.026 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.285 17:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.285 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.286 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.854 17:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.854 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:46.855 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
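Note: the trace above shows gather_supported_nvmf_pci_devs matching both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b) and then resolving each PCI function to its kernel net device through sysfs. A minimal stand-alone sketch of the same scan, assuming lspci is available — the loop below is illustrative, not the harness's exact code:

    # Enumerate Intel E810 functions (0x8086:0x159b) and their net interfaces.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done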
00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:46.855 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:46.855 Found net devices under 0000:af:00.0: cvl_0_0 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:46.855 Found net devices under 0000:af:00.1: cvl_0_1 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.855 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:28:46.855 00:28:46.855 --- 10.0.0.2 ping statistics --- 00:28:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.855 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:28:46.855 00:28:46.855 --- 10.0.0.1 ping statistics --- 00:28:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.855 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.855 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2744372 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2744372 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2744372 ']' 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 [2024-12-09 17:39:15.242300] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:46.856 [2024-12-09 17:39:15.243200] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:28:46.856 [2024-12-09 17:39:15.243240] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.856 [2024-12-09 17:39:15.323462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:46.856 [2024-12-09 17:39:15.363393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.856 [2024-12-09 17:39:15.363429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.856 [2024-12-09 17:39:15.363436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.856 [2024-12-09 17:39:15.363445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.856 [2024-12-09 17:39:15.363450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.856 [2024-12-09 17:39:15.364776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.856 [2024-12-09 17:39:15.364882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.856 [2024-12-09 17:39:15.364883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.856 [2024-12-09 17:39:15.432861] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:46.856 [2024-12-09 17:39:15.433589] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:46.856 [2024-12-09 17:39:15.433696] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
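Note: -m 0xE is a core mask, 0xE = 0b1110, so the target claims cores 1-3 and leaves core 0 free; that matches the "Total cores available: 3" notice and the three reactors started above. waitforlisten then blocks until the new pid answers on the RPC socket. A rough sketch of that wait, assuming SPDK's stock rpc.py client and the $nvmfpid captured above (the real helper also enforces a retry budget):

    # Poll the RPC socket until the target is up, bailing out if it died.
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done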
00:28:46.856 [2024-12-09 17:39:15.433847] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 [2024-12-09 17:39:15.497656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 Malloc0 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 Delay0 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
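Note: the rpc_cmd calls above assemble the abort target: a 64 MiB malloc bdev with 4096-byte blocks, wrapped by a delay bdev that adds 1,000,000 us to every read and write so that plenty of I/O is still in flight when the aborts arrive, then a subsystem exporting Delay0 as a namespace (the TCP listeners are added just below). Since rpc_cmd forwards its arguments to rpc.py, the equivalent direct invocations would be, assuming the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0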
00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 [2024-12-09 17:39:15.593587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.856 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:46.856 [2024-12-09 17:39:15.723965] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:48.754 Initializing NVMe Controllers 00:28:48.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:48.754 controller IO queue size 128 less than required 00:28:48.754 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:48.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:48.754 Initialization complete. Launching workers. 
00:28:48.754 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38347 00:28:48.754 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38404, failed to submit 66 00:28:48.754 success 38347, unsuccessful 57, failed 0 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.754 rmmod nvme_tcp 00:28:48.754 rmmod nvme_fabrics 00:28:48.754 rmmod nvme_keyring 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.754 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2744372 ']' 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2744372 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2744372 ']' 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2744372 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744372 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744372' 00:28:48.755 killing process with pid 2744372 
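Note: the abort counters above reconcile cleanly: 38347 aborts found their target command (matching the 38347 I/Os reported as failed, which here appears to mean aborted), 57 completed without finding anything left to cancel, and 66 more could not be submitted at all; 38347 + 57 = 38404 aborts submitted, and 38404 + 66 = 38470 total attempts. With a queue depth of 128 against a 1-second-per-op delay bdev, saturation is expected, which is why the example warns that requests may queue at the driver.

    # Sanity check on the counters:
    echo $((38347 + 57))    # = 38404, the aborts actually submitted
    echo $((38404 + 66))    # = 38470, submitted plus failed-to-submit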
00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2744372 00:28:48.755 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2744372 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.014 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.548 00:28:51.548 real 0m11.158s 00:28:51.548 user 0m10.300s 00:28:51.548 sys 0m5.656s 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.548 ************************************ 00:28:51.548 END TEST nvmf_abort 00:28:51.548 ************************************ 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:51.548 ************************************ 00:28:51.548 START TEST nvmf_ns_hotplug_stress 00:28:51.548 ************************************ 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:51.548 * Looking for test storage... 
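Note: teardown in nvmftestfini mirrors setup. The firewall rule was installed earlier with an 'SPDK_NVMF:' comment tag (see the ipts call before the ping tests), so the iptr step traced above can drop exactly that rule by filtering the saved ruleset before restoring it, leaving all other firewall state untouched:

    # The cleanup idiom behind nvmf/common.sh's iptr, as traced above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

After that, remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace (returning cvl_0_0 to the root namespace) and the leftover address on cvl_0_1 is flushed, so the next test's nvmftestinit starts from a clean slate.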
00:28:51.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:51.548 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.549 --rc genhtml_branch_coverage=1 00:28:51.549 --rc genhtml_function_coverage=1 00:28:51.549 --rc genhtml_legend=1 00:28:51.549 --rc geninfo_all_blocks=1 00:28:51.549 --rc geninfo_unexecuted_blocks=1 00:28:51.549 00:28:51.549 ' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.549 --rc genhtml_branch_coverage=1 00:28:51.549 --rc genhtml_function_coverage=1 00:28:51.549 --rc genhtml_legend=1 00:28:51.549 --rc geninfo_all_blocks=1 00:28:51.549 --rc geninfo_unexecuted_blocks=1 00:28:51.549 00:28:51.549 ' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.549 --rc genhtml_branch_coverage=1 00:28:51.549 --rc genhtml_function_coverage=1 00:28:51.549 --rc genhtml_legend=1 00:28:51.549 --rc geninfo_all_blocks=1 00:28:51.549 --rc geninfo_unexecuted_blocks=1 00:28:51.549 00:28:51.549 ' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.549 --rc genhtml_branch_coverage=1 00:28:51.549 --rc genhtml_function_coverage=1 
00:28:51.549 --rc genhtml_legend=1 00:28:51.549 --rc geninfo_all_blocks=1 00:28:51.549 --rc geninfo_unexecuted_blocks=1 00:28:51.549 00:28:51.549 ' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
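The cmp_versions trace a few lines up is how scripts/common.sh decides whether the installed lcov (1.15) predates 2.x before picking coverage flags. A standalone reconstruction of that componentwise compare; the real helpers live in scripts/common.sh (cmp_versions, decimal), so treat this as a sketch of the logic only:

lt() {   # success when $1 < $2, compared field by field on '.'/'-' separators
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields compare as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "pre-2.x lcov flags"   # mirrors the 'lt 1.15 2' call in the trace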
00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.549 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.119 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.119 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.119 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:58.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:58.119 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.119 
17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:58.119 Found net devices under 0000:af:00.0: cvl_0_0 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.119 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:58.120 Found net devices under 0000:af:00.1: cvl_0_1 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.120 17:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:28:58.120 00:28:58.120 --- 10.0.0.2 ping statistics --- 00:28:58.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.120 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:28:58.120 00:28:58.120 --- 10.0.0.1 ping statistics --- 00:28:58.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.120 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2748159 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2748159 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2748159 ']' 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
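Everything from gather_supported_nvmf_pci_devs through the two pings above is phy-mode plumbing: find the e810 ports, move one into a private namespace as the target side, and keep the other in the root namespace as the initiator. A condensed sketch under the names from this run (cvl_0_0/cvl_0_1 on 0000:af:00.x):

lspci -d 8086:159b                                   # the 0x159b e810 functions found above
ls /sys/bus/pci/devices/0000:af:00.0/net             # -> cvl_0_0, the kernel netdev for that port
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The iptables comment tag is what lets the iptr cleanup shown earlier remove exactly these rules and nothing else.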
00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.120 [2024-12-09 17:39:26.394628] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:58.120 [2024-12-09 17:39:26.395647] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:28:58.120 [2024-12-09 17:39:26.395689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.120 [2024-12-09 17:39:26.474394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.120 [2024-12-09 17:39:26.514962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.120 [2024-12-09 17:39:26.515012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.120 [2024-12-09 17:39:26.515020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.120 [2024-12-09 17:39:26.515027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.120 [2024-12-09 17:39:26.515032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.120 [2024-12-09 17:39:26.516369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.120 [2024-12-09 17:39:26.516475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.120 [2024-12-09 17:39:26.516476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.120 [2024-12-09 17:39:26.584523] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:58.120 [2024-12-09 17:39:26.585240] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:58.120 [2024-12-09 17:39:26.585353] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:58.120 [2024-12-09 17:39:26.585511] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
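nvmfappstart then launches the target inside that namespace with --interrupt-mode (hence the reactors and poll groups all coming up in intr mode in the notices above) and waits for the RPC socket. A sketch of the launch-and-wait, assumed to run from the SPDK checkout root and polling rpc_get_methods as a readiness check rather than reproducing waitforlisten exactly:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
for i in {1..100}; do   # wait until the app answers on /var/tmp/spdk.sock
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done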
00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:58.120 [2024-12-09 17:39:26.817146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.120 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:58.120 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.120 [2024-12-09 17:39:27.189569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.121 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:58.379 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:58.727 Malloc0 00:28:58.727 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:58.727 Delay0 00:28:58.727 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.007 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:59.007 NULL1 00:28:59.007 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
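With the target up, the ns_hotplug_stress body visible in the RPCs above is: build one subsystem whose namespaces are a slow Delay0 bdev and a resizable NULL1 bdev, start spdk_nvme_perf against it, then hot-remove, re-add, and resize namespaces for as long as perf keeps running. A condensed sketch of that flow, assumed to run from the SPDK checkout root:

rpc="./scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do   # keep hotplugging while perf still runs
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_resize NULL1 $((++null_size))
done

Each pass bumps NULL1 by one 512-byte block (null_size 1001, 1002, ... in the trace that follows), so the resize path is exercised alongside namespace attach and detach while the initiator's reads fail over, which is what produces the suppressed "Read completed with error (sct=0, sc=11)" bursts below.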
00:28:59.265 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2748630 00:28:59.265 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:59.265 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:28:59.265 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.522 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.780 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:59.780 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:59.780 true 00:28:59.780 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:28:59.780 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.038 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.296 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:00.296 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:00.555 true 00:29:00.555 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:00.555 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.488 Read completed with error (sct=0, sc=11) 00:29:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.488 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.746 17:39:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:01.746 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:02.004 true 00:29:02.004 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:02.004 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.261 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.261 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:02.261 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:02.519 true 00:29:02.519 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:02.519 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.777 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.035 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:03.035 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:03.035 true 00:29:03.293 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:03.293 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.293 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.551 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:03.551 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:03.808 true 00:29:03.808 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:03.808 17:39:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.741 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.999 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:04.999 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:05.257 true 00:29:05.257 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:05.257 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.189 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.189 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:06.189 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:06.446 true 00:29:06.446 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:06.446 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.704 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.962 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:06.962 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:06.962 true 00:29:06.962 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:06.962 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.333 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.334 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:08.334 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:08.591 true 00:29:08.591 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:08.591 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.524 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.524 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:09.524 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:09.781 true 00:29:09.781 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:09.781 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.781 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.039 17:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:10.039 17:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:10.296 true 00:29:10.296 17:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:10.296 17:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.668 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.668 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:11.668 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:11.925 true 00:29:11.925 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:11.925 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.491 17:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.748 17:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:12.748 17:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:13.006 true 00:29:13.006 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:13.006 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.264 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.522 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:13.522 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:13.522 true 00:29:13.522 17:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:13.522 17:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.893 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:14.893 17:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:15.150 true 00:29:15.150 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:15.150 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.081 17:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.081 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:16.081 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:16.340 true 00:29:16.340 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:16.340 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.598 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.598 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:16.598 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:16.855 true 00:29:16.856 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:16.856 17:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.788 17:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.046 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:18.046 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:18.303 true 00:29:18.303 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:18.303 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.561 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.820 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:18.820 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:18.820 true 00:29:18.820 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:18.820 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.192 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:20.192 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:20.450 true 00:29:20.450 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:20.450 17:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.382 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.382 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:21.382 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:21.640 true 00:29:21.640 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:21.640 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.897 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.155 17:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:22.155 17:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:22.155 true 00:29:22.155 17:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:22.155 17:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.527 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.527 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:23.527 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:23.785 true 00:29:23.785 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:23.785 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.043 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.300 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:24.300 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:24.300 true 00:29:24.300 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:24.301 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.672 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.672 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:25.672 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:25.930 true 00:29:25.930 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:25.930 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.861 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.861 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:26.861 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:27.119 true 00:29:27.119 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:27.119 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.377 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.635 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:27.635 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:27.635 true 00:29:27.635 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:27.635 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.892 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.150 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:28.150 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:28.408 true 00:29:28.408 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:28.408 17:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.340 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.340 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:29.340 17:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:29:29.598 Initializing NVMe Controllers
00:29:29.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:29.598 Controller IO queue size 128, less than required.
00:29:29.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:29.598 Controller IO queue size 128, less than required.
00:29:29.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:29.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:29.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:29.598 Initialization complete. Launching workers.
00:29:29.598 ========================================================
00:29:29.598                                                                   Latency(us)
00:29:29.598 Device Information                                             :       IOPS      MiB/s    Average        min        max
00:29:29.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1936.31       0.95   38586.54    1297.10 1137597.81
00:29:29.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15506.99       7.57    8234.56    1575.07  370892.19
00:29:29.598 ========================================================
00:29:29.598 Total                                                          :   17443.30       8.52   11603.81    1297.10 1137597.81
00:29:29.598
00:29:29.598 true 00:29:29.598 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2748630 00:29:29.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2748630) - No such process 00:29:29.598 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2748630 00:29:29.598 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.856 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:30.114 null0 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:30.114 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:30.114 17:39:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:30.372 null1 00:29:30.372 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:30.372 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:30.372 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:30.630 null2 00:29:30.630 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:30.630 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:30.630 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:30.630 null3 00:29:30.630 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:30.630 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:30.630 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:30.888 null4 00:29:30.888 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:30.888 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:30.888 17:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:31.146 null5 00:29:31.146 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:31.146 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:31.146 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:31.146 null6 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:31.405 null7 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:31.405 17:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
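[Annotation] The interleaved sh@14-sh@18 trace lines above and below come from the script's add_remove helper, which each worker runs against its own namespace ID and null bdev. A minimal sketch of what that helper appears to do, reconstructed from the trace alone -- the function body, the $rpc_py variable, and the exact loop form are assumptions, not the script's verbatim source:

    # add_remove <nsid> <bdev>: hot-add <bdev> as namespace <nsid>, then remove
    # it again, ten times (sh@16 shows the "i < 10" loop bound in the trace).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path taken from the trace
    add_remove() {
        local nsid=$1 bdev=$2          # sh@14: e.g. nsid=8 bdev=null7
        for ((i = 0; i < 10; ++i)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }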
00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:31.405 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
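[Annotation] Around sh@58-sh@66 the trace switches to the concurrent phase: eight null bdevs (null0..null7) are created and eight add_remove workers are launched in parallel, one namespace ID each, with the parent waiting on all of them. A sketch of that fan-out as the trace suggests it; only the RPC invocations, nthreads=8, and the wait on PIDs 2753679..2753696 are taken directly from the log, the rest is a reconstruction:

    nthreads=8                         # sh@58
    pids=()                            # sh@58
    for ((i = 0; i < nthreads; ++i)); do
        # sh@60: bdev_null_create <name> <size_MB> <block_size>, e.g. "null0 100 4096"
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) "null$i" &   # sh@63: namespace IDs 1..8 against null0..null7
        pids+=($!)                         # sh@64
    done
    wait "${pids[@]}"                      # sh@66: "wait 2753679 2753680 ..." in the log

Running eight hotplug loops against the same subsystem while I/O is in flight is what exercises the namespace attach/detach paths concurrently; the suppressed "Read completed with error (sct=0, sc=11)" messages earlier are the expected host-side fallout of namespaces disappearing mid-I/O.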
00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2753679 2753680 2753682 2753686 2753688 2753691 2753694 2753696 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.406 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:31.664 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.921 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.922 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.922 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.922 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.922 17:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:31.922 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.922 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.922 17:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.178 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.436 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.695 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.953 17:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.953 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.209 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.210 17:40:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.210 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.467 17:40:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.467 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.725 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.725 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.725 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.725 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.726 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.726 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.726 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.726 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.984 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.241 17:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.241 17:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.241 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.497 
17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.497 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.754 17:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.012 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.270 
17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.270 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
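The add/remove churn traced above comes from the loop at target/ns_hotplug_stress.sh@16-18. A minimal bash sketch of what that loop appears to be doing, reconstructed from the xtrace alone (the randomized namespace order suggests the IDs are shuffled per pass, and the verbatim script body is not part of this log):

  # Reconstructed from the xtrace, not copied from the script: ten passes that
  # hot-add namespaces 1-8 (backed by bdevs null0-null7) in shuffled order and
  # then hot-remove them, exercising NVMe-oF namespace hotplug under load.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do
      for n in $(shuf -i 1-8); do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in $(shuf -i 1-8); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done

Some passes in the trace add or remove fewer than eight namespaces, so the real script likely draws a random subset as well; the mapping of NSID n to bdev null(n-1) is consistent throughout the trace.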
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:35.529 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:35.529 rmmod nvme_tcp
00:29:35.529 rmmod nvme_fabrics
00:29:35.529 rmmod nvme_keyring
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2748159 ']'
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2748159
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2748159 ']'
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2748159
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2748159
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2748159'
00:29:35.788 killing process with pid 2748159
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2748159
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2748159
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:35.788 17:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:38.323
00:29:38.323 real 0m46.796s
00:29:38.323 user 2m56.579s
00:29:38.323 sys 0m19.460s
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:29:38.323 ************************************
00:29:38.323 END TEST nvmf_ns_hotplug_stress
00:29:38.323 ************************************
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
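The teardown traced above (ns_hotplug_stress.sh@68-70 into nvmf/common.sh) is the standard nvmftestfini sequence: unload the initiator-side kernel modules, kill the nvmf_tgt reactor by PID, strip the SPDK iptables rules, and remove the test network namespace. Condensed into a hedged sketch; function names visible in the trace are real, while the PID variable name is an assumption:

  # Condensed from the traced commands; not the verbatim nvmf/common.sh source.
  nvmftestfini() {
      sync
      set +e
      for i in {1..20}; do                    # retry until the module unload sticks
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      done
      set -e
      [ -n "$nvmfpid" ] && killprocess "$nvmfpid"   # assumed var name; kill -0 probe, kill, wait (pid 2748159 here)
      iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK_NVMF rules, keep the rest
      remove_spdk_ns                          # delete the test namespace (cvl_0_0_ns_spdk here)
      ip -4 addr flush cvl_0_1                # flush the leftover test address
  }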
00:29:38.323 ************************************
00:29:38.323 START TEST nvmf_delete_subsystem
00:29:38.323 ************************************
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:29:38.323 * Looking for test storage...
00:29:38.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:29:38.323 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
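The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on the characters . - : and compares them field by field. A standalone sketch of that comparison as the trace implies it (simplified; the real helper also routes >, =, and friends through cmp_versions and sanitizes each field with decimal):

  # Field-wise version compare, as implied by the cmp_versions trace above.
  # Returns 0 when $1 < $2, so "lt 1.15 2" succeeds, as it does in this run.
  lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      # walk the longer of the two field lists; missing fields default to 0
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # all fields equal, so not strictly less-than
  }

In the traced run ver1=(1 15) and ver2=(2); the first fields already decide it (1 < 2), which is why the trace returns 0 after a single iteration.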
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:38.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:38.324 --rc genhtml_branch_coverage=1
00:29:38.324 --rc genhtml_function_coverage=1
00:29:38.324 --rc genhtml_legend=1
00:29:38.324 --rc geninfo_all_blocks=1
00:29:38.324 --rc geninfo_unexecuted_blocks=1
00:29:38.324
00:29:38.324 '
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:38.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:38.324 --rc genhtml_branch_coverage=1
00:29:38.324 --rc genhtml_function_coverage=1
00:29:38.324 --rc genhtml_legend=1
00:29:38.324 --rc geninfo_all_blocks=1
00:29:38.324 --rc geninfo_unexecuted_blocks=1
00:29:38.324
00:29:38.324 '
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:38.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:38.324 --rc genhtml_branch_coverage=1
00:29:38.324 --rc genhtml_function_coverage=1
00:29:38.324 --rc genhtml_legend=1
00:29:38.324 --rc geninfo_all_blocks=1
00:29:38.324 --rc geninfo_unexecuted_blocks=1
00:29:38.324
00:29:38.324 '
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:38.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:38.324 --rc genhtml_branch_coverage=1
00:29:38.324 --rc genhtml_function_coverage=1
00:29:38.324 --rc genhtml_legend=1
00:29:38.324 --rc geninfo_all_blocks=1
00:29:38.324 --rc geninfo_unexecuted_blocks=1
00:29:38.324
00:29:38.324 '
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:38.324 17:40:07
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.324 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.891 17:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:44.891 17:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:44.891 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:44.891 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.891 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.892 17:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:44.892 Found net devices under 0000:af:00.0: cvl_0_0 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:44.892 Found net devices under 0000:af:00.1: cvl_0_1 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.892 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:29:44.892 00:29:44.892 --- 10.0.0.2 ping statistics --- 00:29:44.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.892 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:29:44.892 00:29:44.892 --- 10.0.0.1 ping statistics --- 00:29:44.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.892 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2758007 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2758007 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2758007 ']' 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
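For orientation: nvmftestinit has just carved the two detected e810 ports into an initiator/target pair by moving one of them into a private network namespace. Collected from the nvmf/common.sh xtrace above, the plumbing amounts to the following (a sketch assembled from this run's commands; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are specific to this job):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP port 4420 (NVMe/TCP) on the initiator side
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two successful pings above verify the IP path in both directions before any NVMe/TCP traffic is attempted.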
00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.892 [2024-12-09 17:40:13.393965] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:44.892 [2024-12-09 17:40:13.394862] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:29:44.892 [2024-12-09 17:40:13.394894] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.892 [2024-12-09 17:40:13.472965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:44.892 [2024-12-09 17:40:13.514426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.892 [2024-12-09 17:40:13.514455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.892 [2024-12-09 17:40:13.514462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.892 [2024-12-09 17:40:13.514468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.892 [2024-12-09 17:40:13.514473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.892 [2024-12-09 17:40:13.515693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.892 [2024-12-09 17:40:13.515696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.892 [2024-12-09 17:40:13.583645] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:44.892 [2024-12-09 17:40:13.584183] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:44.892 [2024-12-09 17:40:13.584372] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
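The target itself is launched inside that namespace; the exact command is visible in the nvmf/common.sh@508 xtrace above. In sketch form (waitforlisten is SPDK's test helper that polls until the application answers on /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &         # shm id 0, all trace groups, cores 0-1
    nvmfpid=$!
    waitforlisten "$nvmfpid"

The NOTICE lines above confirm that --interrupt-mode took effect: both reactors start (cores 0 and 1), and every spdk_thread, including the two nvmf poll groups, is switched to interrupt-driven operation instead of busy polling, which is the point of this test variant.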
00:29:44.892 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 [2024-12-09 17:40:13.652567] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 [2024-12-09 17:40:13.680811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 NULL1 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.893 17:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 Delay0 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2758151 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:44.893 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:44.893 [2024-12-09 17:40:13.791772] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
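The fixture is now complete. Reconstructed from the delete_subsystem.sh xtrace above (a sketch, not the verbatim script; rpc_cmd is SPDK's test wrapper around scripts/rpc.py, and the comments are the editor's reading of the parameters):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
    # wrap NULL1 in a delay bdev with 1,000,000 us (1 s) average and p99 latency for both
    # reads and writes, so plenty of I/O is still queued whenever the subsystem is deleted
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &  # 5 s of 70/30 random r/w, qd 128, 512 B I/O, cores 2-3
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tear it down mid-I/O

The storm of "completed with error (sct=0, sc=8)" completions that follows is the expected result: status code 0x08 in the generic status type is "Command Aborted due to SQ Deletion", i.e. every command still queued behind Delay0 is failed back to the initiator as the queues are destroyed.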
00:29:46.793 17:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:46.793 17:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.793 17:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:29:46.793 Read completed with error (sct=0, sc=8)
00:29:46.793 Read completed with error (sct=0, sc=8)
00:29:46.793 Write completed with error (sct=0, sc=8)
00:29:46.793 starting I/O failed: -6
[... repeated runs of "Read/Write completed with error (sct=0, sc=8)" interleaved with further "starting I/O failed: -6" markers elided for readability; the unique lines are kept below ...]
00:29:46.793 [2024-12-09 17:40:15.880584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe182c0 is same with the state(6) to be set
[... further "completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers elided ...]
00:29:46.794 [2024-12-09 17:40:15.882354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc78400d490 is same with the state(6) to be set
00:29:47.730 [2024-12-09 17:40:16.846097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe199b0 is same with the state(6) to be set
[... further "completed with error (sct=0, sc=8)" completions elided ...]
00:29:47.730 [2024-12-09 17:40:16.884877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc784000c40 is same with the state(6) to be set
[... further completions elided ...]
00:29:47.730 [2024-12-09 17:40:16.885072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe18960 is same with the state(6) to be set
[... further completions elided ...]
00:29:47.730 [2024-12-09 17:40:16.885363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc78400d7c0 is same with the state(6) to be set
[... further completions elided ...]
00:29:47.730 [2024-12-09 17:40:16.885867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc78400d020 is same with the state(6) to be set
00:29:47.730 Initializing NVMe Controllers
00:29:47.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:47.730 Controller IO queue size 128, less than required.
00:29:47.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:47.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:47.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:47.730 Initialization complete. Launching workers.
00:29:47.730 ======================================================== 00:29:47.730 Latency(us) 00:29:47.730 Device Information : IOPS MiB/s Average min max 00:29:47.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 147.51 0.07 907746.27 234.66 1010499.30 00:29:47.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.90 0.08 1044393.74 708.34 2001888.56 00:29:47.731 ======================================================== 00:29:47.731 Total : 312.41 0.15 979871.81 234.66 2001888.56 00:29:47.731 00:29:47.731 [2024-12-09 17:40:16.886466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe199b0 (9): Bad file descriptor 00:29:47.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:47.731 17:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.731 17:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:47.731 17:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2758151 00:29:47.731 17:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2758151 00:29:48.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2758151) - No such process 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2758151 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2758151 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2758151 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.297 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.298 [2024-12-09 17:40:17.416717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2758707 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:48.298 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:48.556 [2024-12-09 17:40:17.501236] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
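Having shown that deletion aborts in-flight I/O, the test re-creates the subsystem and now lets a shorter (-t 3) perf job run to completion against the same delayed namespace. The iterations that follow correspond to a liveness loop of roughly this shape (inferred from the delete_subsystem.sh@56-@60 xtrace line numbers; a sketch, not the verbatim script):

    delay=0
    while kill -0 "$perf_pid"; do       # kill -0 sends no signal; it only tests that the pid (2758707 here) still exists
        (( delay++ > 20 )) && exit 1    # give up if perf is still running after ~10 s (20 x 0.5 s)
        sleep 0.5
    done

Once perf exits, kill -0 reports "No such process" and the loop ends. Note that the per-I/O averages in the perf summary below sit just above 1,000,000 us: exactly the artificial latency configured on Delay0.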
00:29:48.814 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:48.814 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:48.814 17:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:49.388 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:49.388 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:49.388 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.011 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.011 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:50.011 17:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.323 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.323 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:50.323 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.890 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.890 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:50.890 17:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.457 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.457 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:51.457 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.457 Initializing NVMe Controllers 00:29:51.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.457 Controller IO queue size 128, less than required. 00:29:51.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:51.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:51.457 Initialization complete. Launching workers. 
00:29:51.457 ======================================================== 00:29:51.457 Latency(us) 00:29:51.457 Device Information : IOPS MiB/s Average min max 00:29:51.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002144.99 1000124.93 1040578.86 00:29:51.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004032.94 1000188.81 1041368.26 00:29:51.457 ======================================================== 00:29:51.457 Total : 256.00 0.12 1003088.97 1000124.93 1041368.26 00:29:51.457 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2758707 00:29:52.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2758707) - No such process 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2758707 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.024 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.024 rmmod nvme_tcp 00:29:52.024 rmmod nvme_fabrics 00:29:52.024 rmmod nvme_keyring 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2758007 ']' 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2758007 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2758007 ']' 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2758007 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2758007 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2758007' 00:29:52.024 killing process with pid 2758007 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2758007 00:29:52.024 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2758007 00:29:52.283 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.283 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.283 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.284 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.188 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.188 00:29:54.188 real 0m16.211s 00:29:54.188 user 0m25.919s 00:29:54.188 sys 0m6.197s 00:29:54.188 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.188 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:54.188 ************************************ 00:29:54.188 END TEST nvmf_delete_subsystem 00:29:54.188 ************************************ 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:54.447 ************************************ 00:29:54.447 START TEST nvmf_host_management 00:29:54.447 ************************************ 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:54.447 * Looking for test storage... 00:29:54.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:54.447 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:54.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.448 --rc genhtml_branch_coverage=1 00:29:54.448 --rc genhtml_function_coverage=1 00:29:54.448 --rc genhtml_legend=1 00:29:54.448 --rc geninfo_all_blocks=1 00:29:54.448 --rc geninfo_unexecuted_blocks=1 00:29:54.448 00:29:54.448 ' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:54.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.448 --rc genhtml_branch_coverage=1 00:29:54.448 --rc genhtml_function_coverage=1 00:29:54.448 --rc genhtml_legend=1 00:29:54.448 --rc geninfo_all_blocks=1 00:29:54.448 --rc geninfo_unexecuted_blocks=1 00:29:54.448 00:29:54.448 ' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:54.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.448 --rc genhtml_branch_coverage=1 00:29:54.448 --rc genhtml_function_coverage=1 00:29:54.448 --rc genhtml_legend=1 00:29:54.448 --rc geninfo_all_blocks=1 00:29:54.448 --rc geninfo_unexecuted_blocks=1 00:29:54.448 00:29:54.448 ' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:54.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.448 --rc genhtml_branch_coverage=1 00:29:54.448 --rc genhtml_function_coverage=1 00:29:54.448 --rc genhtml_legend=1 
00:29:54.448 --rc geninfo_all_blocks=1 00:29:54.448 --rc geninfo_unexecuted_blocks=1 00:29:54.448 00:29:54.448 ' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.448 17:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.448 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.017 17:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:01.017 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:01.017 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
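[Editor's note: the xtrace above is nvmf/common.sh matching the two Intel E810 ports (vendor 0x8086, device 0x159b, kernel driver ice) against its supported-device tables; the per-device loop that continues just below globs each port's net device out of sysfs and reports it. A rough standalone sketch of the same discovery, assuming lspci and sysfs are available -- an illustration, not the nvmf/common.sh implementation:

# Enumerate E810 ports by PCI vendor:device ID, then read each port's kernel
# net device name from sysfs, mirroring the "Found ..." lines in this log.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
    done
done
]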
00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:01.017 Found net devices under 0000:af:00.0: cvl_0_0 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:01.017 Found net devices under 0000:af:00.1: cvl_0_1 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.017 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:30:01.018 00:30:01.018 --- 10.0.0.2 ping statistics --- 00:30:01.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.018 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:30:01.018 00:30:01.018 --- 10.0.0.1 ping statistics --- 00:30:01.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.018 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2762727 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2762727 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2762727 ']' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 [2024-12-09 17:40:29.528354] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:01.018 [2024-12-09 17:40:29.529298] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:30:01.018 [2024-12-09 17:40:29.529335] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.018 [2024-12-09 17:40:29.608336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.018 [2024-12-09 17:40:29.650365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.018 [2024-12-09 17:40:29.650400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.018 [2024-12-09 17:40:29.650407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.018 [2024-12-09 17:40:29.650413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.018 [2024-12-09 17:40:29.650417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.018 [2024-12-09 17:40:29.651932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.018 [2024-12-09 17:40:29.652041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.018 [2024-12-09 17:40:29.652151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.018 [2024-12-09 17:40:29.652152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:01.018 [2024-12-09 17:40:29.720908] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:01.018 [2024-12-09 17:40:29.721440] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:01.018 [2024-12-09 17:40:29.721773] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:01.018 [2024-12-09 17:40:29.721987] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:01.018 [2024-12-09 17:40:29.722038] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
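[Editor's note: the target was launched as "ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E" (see the nvmfappstart xtrace above), so it runs inside the namespace built earlier: cvl_0_0 holds 10.0.0.2/24 inside the namespace, cvl_0_1 holds 10.0.0.1/24 outside, and the iptables rule admits TCP port 4420 on cvl_0_1. The -m 0x1E core mask also explains the four reactor notices: bits 1 through 4 are set, so reactors start on cores 1, 2, 3 and 4. A quick way to decode such a mask in bash, for illustration only:

# Each set bit in an SPDK core mask selects one CPU core for a reactor.
mask=0x1E   # 0b11110 -> cores 1-4, matching the "Reactor started on core N" lines
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done
]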
00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 [2024-12-09 17:40:29.788821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 Malloc0 00:30:01.018 [2024-12-09 17:40:29.881150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2762920 00:30:01.018 17:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2762920 /var/tmp/bdevperf.sock 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2762920 ']' 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:01.018 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.019 { 00:30:01.019 "params": { 00:30:01.019 "name": "Nvme$subsystem", 00:30:01.019 "trtype": "$TEST_TRANSPORT", 00:30:01.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.019 "adrfam": "ipv4", 00:30:01.019 "trsvcid": "$NVMF_PORT", 00:30:01.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.019 "hdgst": ${hdgst:-false}, 00:30:01.019 "ddgst": ${ddgst:-false} 00:30:01.019 }, 00:30:01.019 "method": "bdev_nvme_attach_controller" 00:30:01.019 } 00:30:01.019 EOF 00:30:01.019 )") 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
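[Editor's note: the heredoc template above is how gen_nvmf_target_json assembles the bdevperf config: each argument yields one bdev_nvme_attach_controller entry with the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT variables substituted, the entries are comma-joined (the IFS=, and printf steps shown next), and the result reaches bdevperf as --json /dev/fd/63, i.e. via bash process substitution. A simplified sketch of the pattern, using a hypothetical helper name gen_cfg and eliding any outer config wrapper (not visible in this excerpt):

# Build one attach-controller JSON entry per subsystem index, comma-join them;
# the literal values below match the resolved config printed just after this note.
gen_cfg() {
    local subsystem entries=()
    for subsystem in "${@:-1}"; do
        entries+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${entries[*]}"
}

# <(gen_cfg 0) is what shows up on the bdevperf command line as --json /dev/fd/63:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_cfg 0) -q 64 -o 65536 -w verify -t 10
]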
00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:01.019 17:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.019 "params": { 00:30:01.019 "name": "Nvme0", 00:30:01.019 "trtype": "tcp", 00:30:01.019 "traddr": "10.0.0.2", 00:30:01.019 "adrfam": "ipv4", 00:30:01.019 "trsvcid": "4420", 00:30:01.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:01.019 "hdgst": false, 00:30:01.019 "ddgst": false 00:30:01.019 }, 00:30:01.019 "method": "bdev_nvme_attach_controller" 00:30:01.019 }' 00:30:01.019 [2024-12-09 17:40:29.980504] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:30:01.019 [2024-12-09 17:40:29.980559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762920 ] 00:30:01.019 [2024-12-09 17:40:30.055433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.019 [2024-12-09 17:40:30.095795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.277 Running I/O for 10 seconds... 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.277 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=92 00:30:01.278 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 92 -ge 100 ']' 00:30:01.278 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=705 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 705 -ge 100 ']' 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.538 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:01.538 [2024-12-09 17:40:30.697824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.538 [2024-12-09 17:40:30.697868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.538 [2024-12-09 17:40:30.697879] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.538 [2024-12-09 17:40:30.697886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.538 [2024-12-09 17:40:30.697894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.538 [2024-12-09 17:40:30.697901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.538 [2024-12-09 17:40:30.697908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.538 [2024-12-09 17:40:30.697914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.538 [2024-12-09 17:40:30.697921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125aa0 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [2024-12-09 17:40:30.700640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.538 [... the same tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1ef0a60 repeats verbatim several dozen more times here, timestamps 2024-12-09 17:40:30.700647 through 17:40:30.700897 ...] 00:30:01.539 [2024-12-09 17:40:30.700903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the
state(6) to be set 00:30:01.539 [2024-12-09 17:40:30.700909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.539 [2024-12-09 17:40:30.700915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.539 [2024-12-09 17:40:30.700920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef0a60 is same with the state(6) to be set 00:30:01.539 [2024-12-09 17:40:30.701036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701187] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.539 [2024-12-09 17:40:30.701515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.539 [2024-12-09 17:40:30.701523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.701989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.701996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.702003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.540 [2024-12-09 17:40:30.702011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.540 [2024-12-09 17:40:30.702018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1139550 is same with the state(6) to be set 00:30:01.540 [2024-12-09 17:40:30.702961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:01.540 task offset: 98304 on job bdev=Nvme0n1 fails 00:30:01.540 00:30:01.540 Latency(us) 00:30:01.540 [2024-12-09T16:40:30.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.540 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.540 Job: Nvme0n1 ended in about 0.41 seconds with error 00:30:01.540 Verification LBA range: start 0x0 length 0x400 00:30:01.540 Nvme0n1 : 0.41 1874.44 117.15 156.20 0.00 30697.80 3573.27 26713.72 00:30:01.540 [2024-12-09T16:40:30.719Z] =================================================================================================================== 00:30:01.540 [2024-12-09T16:40:30.719Z] Total : 1874.44 117.15 156.20 0.00 30697.80 3573.27 26713.72 00:30:01.540 [2024-12-09 17:40:30.705365] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:01.540 [2024-12-09 17:40:30.705386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125aa0 (9): Bad file descriptor 00:30:01.540 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.540 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
00:30:01.540 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.540 [2024-12-09 17:40:30.706337] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:30:01.540 [2024-12-09 17:40:30.706452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:01.541 [2024-12-09 17:40:30.706478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:01.541 [2024-12-09 17:40:30.706494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:30:01.541 [2024-12-09 17:40:30.706502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:30:01.541 [2024-12-09 17:40:30.706509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.541 [2024-12-09 17:40:30.706516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1125aa0
00:30:01.541 [2024-12-09 17:40:30.706535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125aa0 (9): Bad file descriptor
00:30:01.541 [2024-12-09 17:40:30.706546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:30:01.541 [2024-12-09 17:40:30.706553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:30:01.541 [2024-12-09 17:40:30.706562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:30:01.541 [2024-12-09 17:40:30.706569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
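The access failure above is the state host_management.sh provokes on purpose: the host NQN is not yet on the subsystem's allow list, so the fabric CONNECT completes with sct 1, sc 132 (0x84, invalid host). The rpc_cmd being traced wraps SPDK's rpc.py; a minimal by-hand sketch of the same step, using the rpc.py path assigned to rpc_py elsewhere in this log and the NQNs from the trace above:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0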
00:30:01.799 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.799 17:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2762920 00:30:02.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2762920) - No such process 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.734 { 00:30:02.734 "params": { 00:30:02.734 "name": "Nvme$subsystem", 00:30:02.734 "trtype": "$TEST_TRANSPORT", 00:30:02.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.734 "adrfam": "ipv4", 00:30:02.734 "trsvcid": "$NVMF_PORT", 00:30:02.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.734 "hdgst": ${hdgst:-false}, 00:30:02.734 "ddgst": ${ddgst:-false} 00:30:02.734 }, 00:30:02.734 "method": "bdev_nvme_attach_controller" 00:30:02.734 } 00:30:02.734 EOF 00:30:02.734 )") 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:02.734 17:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:02.734 "params": { 00:30:02.734 "name": "Nvme0", 00:30:02.734 "trtype": "tcp", 00:30:02.734 "traddr": "10.0.0.2", 00:30:02.734 "adrfam": "ipv4", 00:30:02.734 "trsvcid": "4420", 00:30:02.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.734 "hdgst": false, 00:30:02.734 "ddgst": false 00:30:02.734 }, 00:30:02.734 "method": "bdev_nvme_attach_controller" 00:30:02.734 }' 00:30:02.734 [2024-12-09 17:40:31.771789] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
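Here bdevperf takes its controller config from /dev/fd/62, i.e. the JSON fragment printf'd above is generated on the fly by gen_nvmf_target_json and appears to be streamed in via process substitution. A standalone sketch of the equivalent run, assuming that generated JSON were first saved to a hypothetical /tmp/bdevperf_nvme0.json (flags as traced: queue depth 64, 64 KiB I/Os, verify workload, 1-second run):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1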
00:30:02.734 [2024-12-09 17:40:31.771838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2763165 ]
00:30:02.734 [2024-12-09 17:40:31.844599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:02.734 [2024-12-09 17:40:31.882573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:02.992 Running I/O for 1 seconds...
00:30:03.927 1856.00 IOPS, 116.00 MiB/s
00:30:03.927
00:30:03.927 Latency(us)
00:30:03.927 [2024-12-09T16:40:33.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:03.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:03.927 Verification LBA range: start 0x0 length 0x400
00:30:03.927 Nvme0n1 : 1.00 1915.15 119.70 0.00 0.00 32885.01 7770.70 27088.21
00:30:03.927 [2024-12-09T16:40:33.106Z] ===================================================================================================================
00:30:03.927 [2024-12-09T16:40:33.106Z] Total : 1915.15 119.70 0.00 0.00 32885.01 7770.70 27088.21
00:30:04.186 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:30:04.186 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:30:04.186 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:04.187 rmmod nvme_tcp
00:30:04.187 rmmod nvme_fabrics
00:30:04.187 rmmod nvme_keyring
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2762727 ']'
00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2762727
00:30:04.187 17:40:33
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2762727 ']' 00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2762727 00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.187 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762727 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762727' 00:30:04.446 killing process with pid 2762727 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2762727 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2762727 00:30:04.446 [2024-12-09 17:40:33.518206] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.446 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:06.985 00:30:06.985 real 0m12.212s 00:30:06.985 user 
0m17.588s 00:30:06.985 sys 0m6.206s 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.985 ************************************ 00:30:06.985 END TEST nvmf_host_management 00:30:06.985 ************************************ 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.985 ************************************ 00:30:06.985 START TEST nvmf_lvol 00:30:06.985 ************************************ 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:06.985 * Looking for test storage... 00:30:06.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:06.985 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
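The scripts/common.sh trace running through this point is cmp_versions deciding whether the installed lcov (1.15) predates version 2, which selects the older coverage-flag spelling exported just below. A condensed sketch of that comparison, simplified to the "<" path only (the real cmp_versions also handles the other operators and non-numeric components):

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        # $2 (the operator) is fixed to "<" in this simplified sketch
        local IFS=.-:                 # split version strings on dots, dashes, colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1    # component newer: "<" fails
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0    # component older: "<" holds
        done
        return 1                                             # equal versions are not "<"
    }
    lt 1.15 2 && echo "lcov is pre-2.0"    # the branch this log takes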
00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:06.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.986 --rc genhtml_branch_coverage=1 00:30:06.986 --rc genhtml_function_coverage=1 00:30:06.986 --rc genhtml_legend=1 00:30:06.986 --rc geninfo_all_blocks=1 00:30:06.986 --rc geninfo_unexecuted_blocks=1 00:30:06.986 00:30:06.986 ' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:06.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.986 --rc genhtml_branch_coverage=1 00:30:06.986 --rc genhtml_function_coverage=1 00:30:06.986 --rc genhtml_legend=1 00:30:06.986 --rc geninfo_all_blocks=1 00:30:06.986 --rc geninfo_unexecuted_blocks=1 00:30:06.986 00:30:06.986 ' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:06.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.986 --rc genhtml_branch_coverage=1 00:30:06.986 --rc genhtml_function_coverage=1 00:30:06.986 --rc genhtml_legend=1 00:30:06.986 --rc geninfo_all_blocks=1 00:30:06.986 --rc geninfo_unexecuted_blocks=1 00:30:06.986 00:30:06.986 ' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:06.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.986 --rc genhtml_branch_coverage=1 00:30:06.986 --rc genhtml_function_coverage=1 
00:30:06.986 --rc genhtml_legend=1 00:30:06.986 --rc geninfo_all_blocks=1 00:30:06.986 --rc geninfo_unexecuted_blocks=1 00:30:06.986 00:30:06.986 ' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.986 17:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.986 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.556 17:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:13.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:13.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.556 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:13.557 Found net devices under 0000:af:00.0: cvl_0_0 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:13.557 Found net devices under 0000:af:00.1: cvl_0_1 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.557 
17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:30:13.557 00:30:13.557 --- 10.0.0.2 ping statistics --- 00:30:13.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.557 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:13.557 00:30:13.557 --- 10.0.0.1 ping statistics --- 00:30:13.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.557 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2766900 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2766900 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2766900 ']' 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.557 17:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:13.557 [2024-12-09 17:40:41.838045] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:13.557 [2024-12-09 17:40:41.838947] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:30:13.557 [2024-12-09 17:40:41.838979] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.557 [2024-12-09 17:40:41.918409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.557 [2024-12-09 17:40:41.958185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.557 [2024-12-09 17:40:41.958223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.557 [2024-12-09 17:40:41.958230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.557 [2024-12-09 17:40:41.958236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.557 [2024-12-09 17:40:41.958241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.557 [2024-12-09 17:40:41.959561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.557 [2024-12-09 17:40:41.959670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.557 [2024-12-09 17:40:41.959671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.557 [2024-12-09 17:40:42.026213] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:13.557 [2024-12-09 17:40:42.027028] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:13.557 [2024-12-09 17:40:42.027298] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:13.557 [2024-12-09 17:40:42.027382] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
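The nvmf_lvol body that runs next drives everything through rpc.py against the target just started above. A minimal sketch of that sequence in shell, assuming an SPDK checkout as the working directory ($rpc stands in for the full Jenkins-workspace rpc.py path used in this run, and the UUID-capturing variables are illustrative helpers, not part of the harness):

# Sketch of the nvmf_lvol.sh flow traced below. Sizes (64 MiB malloc bdevs,
# 512-byte blocks, a 20 MiB lvol grown to 30 MiB) match this run.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
base0=$($rpc bdev_malloc_create 64 512)             # -> Malloc0
base1=$($rpc bdev_malloc_create 64 512)             # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB thin-provisioned volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf writes over TCP in the background ...
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
# ... the lvol is snapshotted, resized, cloned, and the clone inflated live:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait                                                # let perf finish before teardown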
00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:13.558 [2024-12-09 17:40:42.260561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:13.558 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:13.816 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:13.816 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:13.816 17:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:14.075 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9d0bf261-c981-495c-bfd7-d6c40be67a0b 00:30:14.075 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d0bf261-c981-495c-bfd7-d6c40be67a0b lvol 20 00:30:14.333 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bfd93be6-0d2f-48ca-b77f-f88936085315 00:30:14.333 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:14.592 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bfd93be6-0d2f-48ca-b77f-f88936085315 00:30:14.592 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.850 [2024-12-09 17:40:43.856297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:14.850 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.109 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2767305 00:30:15.109 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:15.109 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:16.043 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bfd93be6-0d2f-48ca-b77f-f88936085315 MY_SNAPSHOT 00:30:16.300 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=953570e5-93dc-40ae-84f5-898eefc0b199 00:30:16.300 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bfd93be6-0d2f-48ca-b77f-f88936085315 30 00:30:16.557 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 953570e5-93dc-40ae-84f5-898eefc0b199 MY_CLONE 00:30:16.815 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=dbac9a03-0577-4a51-8a35-24f411dce40e 00:30:16.815 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dbac9a03-0577-4a51-8a35-24f411dce40e 00:30:17.380 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2767305 00:30:25.487 Initializing NVMe Controllers 00:30:25.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:25.487 Controller IO queue size 128, less than required. 00:30:25.487 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:25.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:25.487 Initialization complete. Launching workers. 
00:30:25.487 ========================================================
00:30:25.487                                                                            Latency(us)
00:30:25.487 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:25.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   12696.90      49.60   10082.23    2666.48   67775.90
00:30:25.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   12587.30      49.17   10168.09    3507.52   51579.69
00:30:25.487 ========================================================
00:30:25.487 Total                                                                    :   25284.20      98.77   10124.97    2666.48   67775.90
00:30:25.487
00:30:25.487 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:25.746 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bfd93be6-0d2f-48ca-b77f-f88936085315 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d0bf261-c981-495c-bfd7-d6c40be67a0b 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.006 rmmod nvme_tcp 00:30:26.006 rmmod nvme_fabrics 00:30:26.006 rmmod nvme_keyring 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:26.006 17:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2766900 ']' 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2766900 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2766900 ']' 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2766900 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2766900 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2766900' 00:30:26.006 killing process with pid 2766900 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2766900 00:30:26.006 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2766900 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.265 17:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.170 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.170 00:30:28.170 real 0m21.641s 00:30:28.170 user 0m55.059s 00:30:28.170 sys 0m9.707s 00:30:28.170 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.170 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:28.170 ************************************ 00:30:28.170 END TEST nvmf_lvol 00:30:28.170 ************************************ 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.430 ************************************ 00:30:28.430 START TEST nvmf_lvs_grow 00:30:28.430 
************************************ 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:28.430 * Looking for test storage... 00:30:28.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.430 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:28.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.431 --rc genhtml_branch_coverage=1 00:30:28.431 --rc genhtml_function_coverage=1 00:30:28.431 --rc genhtml_legend=1 00:30:28.431 --rc geninfo_all_blocks=1 00:30:28.431 --rc geninfo_unexecuted_blocks=1 00:30:28.431 00:30:28.431 ' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:28.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.431 --rc genhtml_branch_coverage=1 00:30:28.431 --rc genhtml_function_coverage=1 00:30:28.431 --rc genhtml_legend=1 00:30:28.431 --rc geninfo_all_blocks=1 00:30:28.431 --rc geninfo_unexecuted_blocks=1 00:30:28.431 00:30:28.431 ' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:28.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.431 --rc genhtml_branch_coverage=1 00:30:28.431 --rc genhtml_function_coverage=1 00:30:28.431 --rc genhtml_legend=1 00:30:28.431 --rc geninfo_all_blocks=1 00:30:28.431 --rc geninfo_unexecuted_blocks=1 00:30:28.431 00:30:28.431 ' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:28.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.431 --rc genhtml_branch_coverage=1 00:30:28.431 --rc genhtml_function_coverage=1 00:30:28.431 --rc genhtml_legend=1 00:30:28.431 --rc geninfo_all_blocks=1 00:30:28.431 --rc geninfo_unexecuted_blocks=1 00:30:28.431 00:30:28.431 ' 00:30:28.431 17:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.431 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.432 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:28.432 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:28.432 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.691 17:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.262 17:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:35.262 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.262 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:35.263 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:35.263 Found net devices under 0000:af:00.0: cvl_0_0 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:35.263 Found net devices under 0000:af:00.1: cvl_0_1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.263 17:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:30:35.263 00:30:35.263 --- 10.0.0.2 ping statistics --- 00:30:35.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.263 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:30:35.263 00:30:35.263 --- 10.0.0.1 ping statistics --- 00:30:35.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.263 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2772463 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2772463 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2772463 ']' 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.263 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:35.264 [2024-12-09 17:41:03.542816] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
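The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a short sequence: move one E810 port into a private network namespace for the target, keep its peer port in the root namespace for the initiator, open TCP port 4420, and ping in both directions to prove the link. A minimal sketch of the same steps, assuming the cvl_0_0/cvl_0_1 names enumerated in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

The target application itself is then launched inside the namespace via NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt --interrupt-mode -m 0x1, as in the nvmfpid line above), which is why it listens on 10.0.0.2 while bdevperf later dials in from the root namespace.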
00:30:35.264 [2024-12-09 17:41:03.543685] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:30:35.264 [2024-12-09 17:41:03.543717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.264 [2024-12-09 17:41:03.620406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.264 [2024-12-09 17:41:03.660725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.264 [2024-12-09 17:41:03.660761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.264 [2024-12-09 17:41:03.660768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.264 [2024-12-09 17:41:03.660776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.264 [2024-12-09 17:41:03.660780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.264 [2024-12-09 17:41:03.661286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.264 [2024-12-09 17:41:03.729500] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:35.264 [2024-12-09 17:41:03.729705] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.264 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:35.264 [2024-12-09 17:41:03.969903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:35.264 ************************************ 00:30:35.264 START TEST lvs_grow_clean 00:30:35.264 ************************************ 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:35.264 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:35.523 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=30d49c60-3a06-423f-92ed-750180c395b6 00:30:35.523 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:35.523 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:35.523 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:35.523 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:35.523 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 30d49c60-3a06-423f-92ed-750180c395b6 lvol 150 00:30:35.782 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d29050c8-0cab-412b-8a9c-4e67dc8cd4aa 00:30:35.782 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:35.782 17:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:36.042 [2024-12-09 17:41:05.025671] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:36.042 [2024-12-09 17:41:05.025797] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:36.042 true 00:30:36.042 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:36.042 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:36.300 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:36.300 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:36.300 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d29050c8-0cab-412b-8a9c-4e67dc8cd4aa 00:30:36.559 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.818 [2024-12-09 17:41:05.762164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2772954 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2772954 /var/tmp/bdevperf.sock 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2772954 ']' 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.818 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:36.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:36.819 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.819 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:37.078 [2024-12-09 17:41:06.010662] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:30:37.078 [2024-12-09 17:41:06.010711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772954 ] 00:30:37.078 [2024-12-09 17:41:06.086183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.078 [2024-12-09 17:41:06.126405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.078 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.078 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:37.078 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:37.337 Nvme0n1 00:30:37.337 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:37.597 [ 00:30:37.597 { 00:30:37.597 "name": "Nvme0n1", 00:30:37.597 "aliases": [ 00:30:37.597 "d29050c8-0cab-412b-8a9c-4e67dc8cd4aa" 00:30:37.597 ], 00:30:37.597 "product_name": "NVMe disk", 00:30:37.597 "block_size": 4096, 00:30:37.597 "num_blocks": 38912, 00:30:37.597 "uuid": "d29050c8-0cab-412b-8a9c-4e67dc8cd4aa", 00:30:37.597 "numa_id": 1, 00:30:37.597 "assigned_rate_limits": { 00:30:37.597 "rw_ios_per_sec": 0, 00:30:37.597 "rw_mbytes_per_sec": 0, 00:30:37.597 "r_mbytes_per_sec": 0, 00:30:37.597 "w_mbytes_per_sec": 0 00:30:37.597 }, 00:30:37.597 "claimed": false, 00:30:37.597 "zoned": false, 00:30:37.597 "supported_io_types": { 00:30:37.597 "read": true, 00:30:37.597 "write": true, 00:30:37.597 "unmap": true, 00:30:37.597 "flush": true, 00:30:37.597 "reset": true, 00:30:37.597 "nvme_admin": true, 00:30:37.597 "nvme_io": true, 00:30:37.597 "nvme_io_md": false, 00:30:37.597 "write_zeroes": true, 00:30:37.597 "zcopy": false, 00:30:37.597 "get_zone_info": false, 00:30:37.597 "zone_management": false, 00:30:37.597 "zone_append": false, 00:30:37.597 "compare": true, 00:30:37.597 "compare_and_write": true, 00:30:37.597 "abort": true, 00:30:37.597 "seek_hole": false, 00:30:37.597 "seek_data": false, 00:30:37.597 "copy": true, 
00:30:37.597 "nvme_iov_md": false 00:30:37.597 }, 00:30:37.597 "memory_domains": [ 00:30:37.597 { 00:30:37.597 "dma_device_id": "system", 00:30:37.597 "dma_device_type": 1 00:30:37.597 } 00:30:37.597 ], 00:30:37.597 "driver_specific": { 00:30:37.597 "nvme": [ 00:30:37.597 { 00:30:37.597 "trid": { 00:30:37.597 "trtype": "TCP", 00:30:37.597 "adrfam": "IPv4", 00:30:37.597 "traddr": "10.0.0.2", 00:30:37.597 "trsvcid": "4420", 00:30:37.597 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:37.597 }, 00:30:37.597 "ctrlr_data": { 00:30:37.597 "cntlid": 1, 00:30:37.597 "vendor_id": "0x8086", 00:30:37.597 "model_number": "SPDK bdev Controller", 00:30:37.597 "serial_number": "SPDK0", 00:30:37.597 "firmware_revision": "25.01", 00:30:37.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:37.597 "oacs": { 00:30:37.597 "security": 0, 00:30:37.597 "format": 0, 00:30:37.597 "firmware": 0, 00:30:37.597 "ns_manage": 0 00:30:37.597 }, 00:30:37.597 "multi_ctrlr": true, 00:30:37.597 "ana_reporting": false 00:30:37.597 }, 00:30:37.597 "vs": { 00:30:37.597 "nvme_version": "1.3" 00:30:37.597 }, 00:30:37.597 "ns_data": { 00:30:37.597 "id": 1, 00:30:37.597 "can_share": true 00:30:37.597 } 00:30:37.597 } 00:30:37.597 ], 00:30:37.597 "mp_policy": "active_passive" 00:30:37.597 } 00:30:37.597 } 00:30:37.597 ] 00:30:37.597 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2772963 00:30:37.597 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:37.597 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:37.597 Running I/O for 10 seconds... 
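The bdev_get_bdevs output above is worth checking against the lvol behind it: the volume was created as 150 MiB on a lvstore with 4 MiB clusters, so allocation rounds up to 38 whole clusters, which is exactly the reported 38912 blocks of 4096 bytes. A quick sanity check of that arithmetic in shell, with the sizes taken from the bdev_lvol_create_lvstore and bdev_lvol_create calls above:

echo $(( (150 + 3) / 4 ))                 # 38 clusters: 150 MiB rounded up to whole 4 MiB clusters
echo $(( 38 * 4 * 1024 * 1024 / 4096 ))   # 38912 four-KiB blocks, matching num_blocks

The same rounding shows up later in the teardown, where the lvol JSON reports num_allocated_clusters of 38.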
00:30:38.976 Latency(us) 00:30:38.976 [2024-12-09T16:41:08.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.977 Nvme0n1 : 1.00 23148.00 90.42 0.00 0.00 0.00 0.00 0.00 00:30:38.977 [2024-12-09T16:41:08.156Z] =================================================================================================================== 00:30:38.977 [2024-12-09T16:41:08.156Z] Total : 23148.00 90.42 0.00 0.00 0.00 0.00 0.00 00:30:38.977 00:30:39.545 17:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:39.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:39.804 Nvme0n1 : 2.00 23512.00 91.84 0.00 0.00 0.00 0.00 0.00 00:30:39.804 [2024-12-09T16:41:08.983Z] =================================================================================================================== 00:30:39.804 [2024-12-09T16:41:08.983Z] Total : 23512.00 91.84 0.00 0.00 0.00 0.00 0.00 00:30:39.804 00:30:39.804 true 00:30:39.804 17:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:39.804 17:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:40.063 17:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:40.063 17:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:40.063 17:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2772963 00:30:40.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:40.632 Nvme0n1 : 3.00 23633.33 92.32 0.00 0.00 0.00 0.00 0.00 00:30:40.632 [2024-12-09T16:41:09.811Z] =================================================================================================================== 00:30:40.632 [2024-12-09T16:41:09.811Z] Total : 23633.33 92.32 0.00 0.00 0.00 0.00 0.00 00:30:40.632 00:30:42.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.010 Nvme0n1 : 4.00 23598.75 92.18 0.00 0.00 0.00 0.00 0.00 00:30:42.010 [2024-12-09T16:41:11.189Z] =================================================================================================================== 00:30:42.010 [2024-12-09T16:41:11.189Z] Total : 23598.75 92.18 0.00 0.00 0.00 0.00 0.00 00:30:42.010 00:30:42.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.949 Nvme0n1 : 5.00 23679.60 92.50 0.00 0.00 0.00 0.00 0.00 00:30:42.949 [2024-12-09T16:41:12.128Z] =================================================================================================================== 00:30:42.949 [2024-12-09T16:41:12.128Z] Total : 23679.60 92.50 0.00 0.00 0.00 0.00 0.00 00:30:42.949 00:30:43.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.887 Nvme0n1 : 6.00 23754.67 92.79 0.00 0.00 0.00 0.00 0.00 00:30:43.888 [2024-12-09T16:41:13.067Z] 
=================================================================================================================== 00:30:43.888 [2024-12-09T16:41:13.067Z] Total : 23754.67 92.79 0.00 0.00 0.00 0.00 0.00 00:30:43.888 00:30:44.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.825 Nvme0n1 : 7.00 23808.29 93.00 0.00 0.00 0.00 0.00 0.00 00:30:44.825 [2024-12-09T16:41:14.004Z] =================================================================================================================== 00:30:44.825 [2024-12-09T16:41:14.004Z] Total : 23808.29 93.00 0.00 0.00 0.00 0.00 0.00 00:30:44.825 00:30:45.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.864 Nvme0n1 : 8.00 23832.62 93.10 0.00 0.00 0.00 0.00 0.00 00:30:45.864 [2024-12-09T16:41:15.043Z] =================================================================================================================== 00:30:45.864 [2024-12-09T16:41:15.043Z] Total : 23832.62 93.10 0.00 0.00 0.00 0.00 0.00 00:30:45.864 00:30:46.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.804 Nvme0n1 : 9.00 23858.67 93.20 0.00 0.00 0.00 0.00 0.00 00:30:46.804 [2024-12-09T16:41:15.983Z] =================================================================================================================== 00:30:46.804 [2024-12-09T16:41:15.983Z] Total : 23858.67 93.20 0.00 0.00 0.00 0.00 0.00 00:30:46.804 00:30:47.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.742 Nvme0n1 : 10.00 23884.30 93.30 0.00 0.00 0.00 0.00 0.00 00:30:47.742 [2024-12-09T16:41:16.921Z] =================================================================================================================== 00:30:47.742 [2024-12-09T16:41:16.921Z] Total : 23884.30 93.30 0.00 0.00 0.00 0.00 0.00 00:30:47.742 00:30:47.742 00:30:47.742 Latency(us) 00:30:47.742 [2024-12-09T16:41:16.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.742 Nvme0n1 : 10.00 23890.58 93.32 0.00 0.00 5354.82 3120.76 26339.23 00:30:47.742 [2024-12-09T16:41:16.921Z] =================================================================================================================== 00:30:47.742 [2024-12-09T16:41:16.921Z] Total : 23890.58 93.32 0.00 0.00 5354.82 3120.76 26339.23 00:30:47.742 { 00:30:47.742 "results": [ 00:30:47.742 { 00:30:47.742 "job": "Nvme0n1", 00:30:47.742 "core_mask": "0x2", 00:30:47.742 "workload": "randwrite", 00:30:47.742 "status": "finished", 00:30:47.742 "queue_depth": 128, 00:30:47.742 "io_size": 4096, 00:30:47.742 "runtime": 10.002728, 00:30:47.742 "iops": 23890.58264905334, 00:30:47.742 "mibps": 93.32258847286461, 00:30:47.742 "io_failed": 0, 00:30:47.742 "io_timeout": 0, 00:30:47.742 "avg_latency_us": 5354.815300888273, 00:30:47.742 "min_latency_us": 3120.7619047619046, 00:30:47.742 "max_latency_us": 26339.230476190478 00:30:47.742 } 00:30:47.742 ], 00:30:47.742 "core_count": 1 00:30:47.742 } 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2772954 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2772954 ']' 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2772954 
00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2772954 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2772954' 00:30:47.742 killing process with pid 2772954 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2772954 00:30:47.742 Received shutdown signal, test time was about 10.000000 seconds 00:30:47.742 00:30:47.742 Latency(us) 00:30:47.742 [2024-12-09T16:41:16.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.742 [2024-12-09T16:41:16.921Z] =================================================================================================================== 00:30:47.742 [2024-12-09T16:41:16.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:47.742 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2772954 00:30:48.002 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:48.261 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.520 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:48.520 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:48.520 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:48.520 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:48.520 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:48.780 [2024-12-09 17:41:17.805743] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 
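Two assertions are packed into this teardown. First, free_clusters=61 is consistent with the earlier grow: 99 data clusters after bdev_lvol_grow_lvstore minus the 38 allocated to the lvol. Second, deleting aio_bdev closes the lvstore riding on it (the vbdev_lvol NOTICE above), so the NOT wrapper asserts that a follow-up lookup must fail. Roughly equivalent plain shell, with the rpc.py path shortened:

# NOT inverts the exit status: a successful lookup here would fail the test
if rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6; then
    echo "lvstore survived aio_bdev deletion" >&2
    exit 1
fi

The expected JSON-RPC error (-19, "No such device") appears in the response that follows.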
00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:48.780 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:49.039 request: 00:30:49.039 { 00:30:49.039 "uuid": "30d49c60-3a06-423f-92ed-750180c395b6", 00:30:49.039 "method": "bdev_lvol_get_lvstores", 00:30:49.039 "req_id": 1 00:30:49.039 } 00:30:49.039 Got JSON-RPC error response 00:30:49.039 response: 00:30:49.039 { 00:30:49.039 "code": -19, 00:30:49.039 "message": "No such device" 00:30:49.039 } 00:30:49.039 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:49.039 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:49.039 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:49.039 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:49.039 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:49.298 aio_bdev 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
d29050c8-0cab-412b-8a9c-4e67dc8cd4aa 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d29050c8-0cab-412b-8a9c-4e67dc8cd4aa 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:49.298 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d29050c8-0cab-412b-8a9c-4e67dc8cd4aa -t 2000 00:30:49.557 [ 00:30:49.557 { 00:30:49.557 "name": "d29050c8-0cab-412b-8a9c-4e67dc8cd4aa", 00:30:49.557 "aliases": [ 00:30:49.557 "lvs/lvol" 00:30:49.557 ], 00:30:49.557 "product_name": "Logical Volume", 00:30:49.557 "block_size": 4096, 00:30:49.557 "num_blocks": 38912, 00:30:49.557 "uuid": "d29050c8-0cab-412b-8a9c-4e67dc8cd4aa", 00:30:49.557 "assigned_rate_limits": { 00:30:49.557 "rw_ios_per_sec": 0, 00:30:49.557 "rw_mbytes_per_sec": 0, 00:30:49.557 "r_mbytes_per_sec": 0, 00:30:49.557 "w_mbytes_per_sec": 0 00:30:49.557 }, 00:30:49.557 "claimed": false, 00:30:49.557 "zoned": false, 00:30:49.557 "supported_io_types": { 00:30:49.557 "read": true, 00:30:49.557 "write": true, 00:30:49.557 "unmap": true, 00:30:49.557 "flush": false, 00:30:49.557 "reset": true, 00:30:49.557 "nvme_admin": false, 00:30:49.557 "nvme_io": false, 00:30:49.557 "nvme_io_md": false, 00:30:49.557 "write_zeroes": true, 00:30:49.557 "zcopy": false, 00:30:49.557 "get_zone_info": false, 00:30:49.557 "zone_management": false, 00:30:49.557 "zone_append": false, 00:30:49.557 "compare": false, 00:30:49.557 "compare_and_write": false, 00:30:49.557 "abort": false, 00:30:49.557 "seek_hole": true, 00:30:49.557 "seek_data": true, 00:30:49.557 "copy": false, 00:30:49.557 "nvme_iov_md": false 00:30:49.557 }, 00:30:49.557 "driver_specific": { 00:30:49.557 "lvol": { 00:30:49.557 "lvol_store_uuid": "30d49c60-3a06-423f-92ed-750180c395b6", 00:30:49.557 "base_bdev": "aio_bdev", 00:30:49.557 "thin_provision": false, 00:30:49.557 "num_allocated_clusters": 38, 00:30:49.557 "snapshot": false, 00:30:49.557 "clone": false, 00:30:49.557 "esnap_clone": false 00:30:49.557 } 00:30:49.557 } 00:30:49.557 } 00:30:49.557 ] 00:30:49.557 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:49.557 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:49.557 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:49.816 17:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:49.816 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:49.816 17:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:50.076 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:50.076 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d29050c8-0cab-412b-8a9c-4e67dc8cd4aa 00:30:50.076 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30d49c60-3a06-423f-92ed-750180c395b6 00:30:50.335 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:50.594 00:30:50.594 real 0m15.579s 00:30:50.594 user 0m15.094s 00:30:50.594 sys 0m1.491s 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:50.594 ************************************ 00:30:50.594 END TEST lvs_grow_clean 00:30:50.594 ************************************ 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:50.594 ************************************ 00:30:50.594 START TEST lvs_grow_dirty 00:30:50.594 ************************************ 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:50.594 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:50.853 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:50.853 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:51.113 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0551c7fa-347b-4bae-a07e-fecb54325c17 00:30:51.113 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:30:51.113 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:51.372 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:51.372 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:51.372 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0551c7fa-347b-4bae-a07e-fecb54325c17 lvol 150 00:30:51.372 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fa02e24c-166c-4b58-afee-af76f2a8228f 00:30:51.372 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:51.372 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:51.631 [2024-12-09 17:41:20.709676] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:51.631 [2024-12-09 17:41:20.709805] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:51.631 true 00:30:51.631 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:30:51.631 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:51.889 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:51.889 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:52.148 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fa02e24c-166c-4b58-afee-af76f2a8228f 00:30:52.148 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.406 [2024-12-09 17:41:21.486096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.406 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2775503 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2775503 /var/tmp/bdevperf.sock 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2775503 ']' 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:52.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
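The dirty variant reruns the same setup the clean test used, down to the 200 MiB AIO file, 4 MiB clusters, and 150 MiB lvol; only the lvstore and lvol UUIDs differ. Condensed from the xtrace, with the rpc.py and workspace paths shortened and aio_file standing in for test/nvmf/target/aio_bdev, the shared sequence is:

truncate -s 200M aio_file
rpc.py bdev_aio_create aio_file aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M aio_file                  # grow the backing file...
rpc.py bdev_aio_rescan aio_bdev            # ...and rescan: 51200 -> 102400 blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches from the root namespace and drives ten seconds of 4 KiB random writes while the test grows the lvstore from 49 to 99 clusters under load.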
00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.665 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:52.665 [2024-12-09 17:41:21.746305] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:30:52.665 [2024-12-09 17:41:21.746353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775503 ] 00:30:52.666 [2024-12-09 17:41:21.819990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.924 [2024-12-09 17:41:21.861271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.924 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.924 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:52.924 17:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:53.184 Nvme0n1 00:30:53.184 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:53.443 [ 00:30:53.443 { 00:30:53.443 "name": "Nvme0n1", 00:30:53.443 "aliases": [ 00:30:53.443 "fa02e24c-166c-4b58-afee-af76f2a8228f" 00:30:53.443 ], 00:30:53.443 "product_name": "NVMe disk", 00:30:53.443 "block_size": 4096, 00:30:53.443 "num_blocks": 38912, 00:30:53.443 "uuid": "fa02e24c-166c-4b58-afee-af76f2a8228f", 00:30:53.443 "numa_id": 1, 00:30:53.443 "assigned_rate_limits": { 00:30:53.443 "rw_ios_per_sec": 0, 00:30:53.443 "rw_mbytes_per_sec": 0, 00:30:53.443 "r_mbytes_per_sec": 0, 00:30:53.443 "w_mbytes_per_sec": 0 00:30:53.443 }, 00:30:53.443 "claimed": false, 00:30:53.443 "zoned": false, 00:30:53.443 "supported_io_types": { 00:30:53.443 "read": true, 00:30:53.443 "write": true, 00:30:53.443 "unmap": true, 00:30:53.443 "flush": true, 00:30:53.443 "reset": true, 00:30:53.443 "nvme_admin": true, 00:30:53.443 "nvme_io": true, 00:30:53.443 "nvme_io_md": false, 00:30:53.443 "write_zeroes": true, 00:30:53.443 "zcopy": false, 00:30:53.443 "get_zone_info": false, 00:30:53.443 "zone_management": false, 00:30:53.443 "zone_append": false, 00:30:53.443 "compare": true, 00:30:53.443 "compare_and_write": true, 00:30:53.443 "abort": true, 00:30:53.443 "seek_hole": false, 00:30:53.443 "seek_data": false, 00:30:53.443 "copy": true, 00:30:53.443 "nvme_iov_md": false 00:30:53.443 }, 00:30:53.443 "memory_domains": [ 00:30:53.443 { 00:30:53.443 "dma_device_id": "system", 00:30:53.443 "dma_device_type": 1 00:30:53.443 } 00:30:53.443 ], 00:30:53.443 "driver_specific": { 00:30:53.443 "nvme": [ 00:30:53.443 { 00:30:53.443 "trid": { 00:30:53.443 "trtype": "TCP", 00:30:53.443 "adrfam": "IPv4", 00:30:53.443 "traddr": "10.0.0.2", 00:30:53.443 "trsvcid": "4420", 00:30:53.443 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:53.443 }, 00:30:53.443 "ctrlr_data": 
{ 00:30:53.443 "cntlid": 1, 00:30:53.443 "vendor_id": "0x8086", 00:30:53.443 "model_number": "SPDK bdev Controller", 00:30:53.443 "serial_number": "SPDK0", 00:30:53.443 "firmware_revision": "25.01", 00:30:53.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.443 "oacs": { 00:30:53.443 "security": 0, 00:30:53.443 "format": 0, 00:30:53.443 "firmware": 0, 00:30:53.443 "ns_manage": 0 00:30:53.443 }, 00:30:53.443 "multi_ctrlr": true, 00:30:53.443 "ana_reporting": false 00:30:53.443 }, 00:30:53.443 "vs": { 00:30:53.443 "nvme_version": "1.3" 00:30:53.443 }, 00:30:53.443 "ns_data": { 00:30:53.443 "id": 1, 00:30:53.443 "can_share": true 00:30:53.443 } 00:30:53.443 } 00:30:53.443 ], 00:30:53.443 "mp_policy": "active_passive" 00:30:53.443 } 00:30:53.443 } 00:30:53.443 ] 00:30:53.443 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.443 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2775520 00:30:53.443 17:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:53.443 Running I/O for 10 seconds... 00:30:54.820 Latency(us) 00:30:54.820 [2024-12-09T16:41:23.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:54.820 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:54.820 [2024-12-09T16:41:24.000Z] =================================================================================================================== 00:30:54.821 [2024-12-09T16:41:24.000Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:54.821 00:30:55.389 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:30:55.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.647 Nvme0n1 : 2.00 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:30:55.647 [2024-12-09T16:41:24.826Z] =================================================================================================================== 00:30:55.647 [2024-12-09T16:41:24.826Z] Total : 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:30:55.647 00:30:55.647 true 00:30:55.648 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:30:55.648 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:55.907 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:55.907 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:55.907 17:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2775520 00:30:56.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:56.473 Nvme0n1 : 
3.00 23553.67 92.01 0.00 0.00 0.00 0.00 0.00 00:30:56.473 [2024-12-09T16:41:25.652Z] =================================================================================================================== 00:30:56.473 [2024-12-09T16:41:25.652Z] Total : 23553.67 92.01 0.00 0.00 0.00 0.00 0.00 00:30:56.473 00:30:57.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.852 Nvme0n1 : 4.00 23650.25 92.38 0.00 0.00 0.00 0.00 0.00 00:30:57.852 [2024-12-09T16:41:27.031Z] =================================================================================================================== 00:30:57.852 [2024-12-09T16:41:27.031Z] Total : 23650.25 92.38 0.00 0.00 0.00 0.00 0.00 00:30:57.852 00:30:58.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:58.789 Nvme0n1 : 5.00 23717.80 92.65 0.00 0.00 0.00 0.00 0.00 00:30:58.789 [2024-12-09T16:41:27.969Z] =================================================================================================================== 00:30:58.790 [2024-12-09T16:41:27.969Z] Total : 23717.80 92.65 0.00 0.00 0.00 0.00 0.00 00:30:58.790 00:30:59.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.728 Nvme0n1 : 6.00 23786.50 92.92 0.00 0.00 0.00 0.00 0.00 00:30:59.728 [2024-12-09T16:41:28.907Z] =================================================================================================================== 00:30:59.728 [2024-12-09T16:41:28.907Z] Total : 23786.50 92.92 0.00 0.00 0.00 0.00 0.00 00:30:59.728 00:31:00.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.665 Nvme0n1 : 7.00 23835.57 93.11 0.00 0.00 0.00 0.00 0.00 00:31:00.665 [2024-12-09T16:41:29.844Z] =================================================================================================================== 00:31:00.665 [2024-12-09T16:41:29.844Z] Total : 23835.57 93.11 0.00 0.00 0.00 0.00 0.00 00:31:00.665 00:31:01.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.603 Nvme0n1 : 8.00 23840.62 93.13 0.00 0.00 0.00 0.00 0.00 00:31:01.603 [2024-12-09T16:41:30.782Z] =================================================================================================================== 00:31:01.603 [2024-12-09T16:41:30.782Z] Total : 23840.62 93.13 0.00 0.00 0.00 0.00 0.00 00:31:01.603 00:31:02.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.538 Nvme0n1 : 9.00 23858.67 93.20 0.00 0.00 0.00 0.00 0.00 00:31:02.538 [2024-12-09T16:41:31.717Z] =================================================================================================================== 00:31:02.538 [2024-12-09T16:41:31.717Z] Total : 23858.67 93.20 0.00 0.00 0.00 0.00 0.00 00:31:02.538 00:31:03.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.475 Nvme0n1 : 10.00 23873.10 93.25 0.00 0.00 0.00 0.00 0.00 00:31:03.475 [2024-12-09T16:41:32.654Z] =================================================================================================================== 00:31:03.475 [2024-12-09T16:41:32.654Z] Total : 23873.10 93.25 0.00 0.00 0.00 0.00 0.00 00:31:03.475 00:31:03.475 00:31:03.475 Latency(us) 00:31:03.475 [2024-12-09T16:41:32.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.475 Nvme0n1 : 10.00 23877.23 93.27 0.00 0.00 5357.59 3120.76 27962.03 00:31:03.475 
[2024-12-09T16:41:32.654Z] =================================================================================================================== 00:31:03.475 [2024-12-09T16:41:32.654Z] Total : 23877.23 93.27 0.00 0.00 5357.59 3120.76 27962.03 00:31:03.475 { 00:31:03.475 "results": [ 00:31:03.475 { 00:31:03.475 "job": "Nvme0n1", 00:31:03.475 "core_mask": "0x2", 00:31:03.475 "workload": "randwrite", 00:31:03.475 "status": "finished", 00:31:03.475 "queue_depth": 128, 00:31:03.475 "io_size": 4096, 00:31:03.475 "runtime": 10.00363, 00:31:03.475 "iops": 23877.23256457906, 00:31:03.475 "mibps": 93.27043970538695, 00:31:03.475 "io_failed": 0, 00:31:03.475 "io_timeout": 0, 00:31:03.475 "avg_latency_us": 5357.5869022071, 00:31:03.475 "min_latency_us": 3120.7619047619046, 00:31:03.475 "max_latency_us": 27962.02666666667 00:31:03.475 } 00:31:03.475 ], 00:31:03.475 "core_count": 1 00:31:03.475 } 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2775503 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2775503 ']' 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2775503 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775503 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2775503' 00:31:03.734 killing process with pid 2775503 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2775503 00:31:03.734 Received shutdown signal, test time was about 10.000000 seconds 00:31:03.734 00:31:03.734 Latency(us) 00:31:03.734 [2024-12-09T16:41:32.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.734 [2024-12-09T16:41:32.913Z] =================================================================================================================== 00:31:03.734 [2024-12-09T16:41:32.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2775503 00:31:03.734 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.993 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:04.253 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:04.253 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2772463 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2772463 00:31:04.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2772463 Killed "${NVMF_APP[@]}" "$@" 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2777328 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2777328 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2777328 ']' 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
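The target is relaunched here inside the cvl_0_0_ns_spdk namespace with --interrupt-mode, and the harness then blocks on the RPC socket before issuing any further rpc.py calls. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as a liveness probe; the real waitforlisten helper in autotest_common.sh does more bookkeeping than this illustrative loop:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
# poll until the JSON-RPC server answers (illustrative 10 s budget)
for _ in $(seq 1 100); do
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done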
00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.512 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:04.512 [2024-12-09 17:41:33.594804] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:04.512 [2024-12-09 17:41:33.595730] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:04.512 [2024-12-09 17:41:33.595770] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.512 [2024-12-09 17:41:33.674916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.772 [2024-12-09 17:41:33.714560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.772 [2024-12-09 17:41:33.714593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.772 [2024-12-09 17:41:33.714600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.772 [2024-12-09 17:41:33.714606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.772 [2024-12-09 17:41:33.714611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.772 [2024-12-09 17:41:33.715096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.772 [2024-12-09 17:41:33.782422] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:04.772 [2024-12-09 17:41:33.782619] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
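The step below is the heart of lvs_grow_dirty: the previous target was killed with SIGKILL while the lvstore metadata was dirty, so re-attaching the same backing file has to trigger blobstore recovery (the "Performing recovery on blobstore" notices that follow) and still report the post-grow geometry. A hedged sketch of that check, using the UUID and expected cluster counts observed in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# re-create the file-backed bdev; blobstore recovery replays the dirty metadata
$RPC bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# the recovered lvstore must show the grown totals
free=$($RPC bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 | jq -r '.[0].free_clusters')
total=$($RPC bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 ))   # values this log reports after recovery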
00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.772 17:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:05.031 [2024-12-09 17:41:34.020476] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:05.031 [2024-12-09 17:41:34.020683] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:05.031 [2024-12-09 17:41:34.020770] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:05.031 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:05.031 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fa02e24c-166c-4b58-afee-af76f2a8228f 00:31:05.031 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fa02e24c-166c-4b58-afee-af76f2a8228f 00:31:05.032 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:05.032 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:05.032 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:05.032 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:05.032 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:05.291 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa02e24c-166c-4b58-afee-af76f2a8228f -t 2000 00:31:05.291 [ 00:31:05.291 { 00:31:05.291 "name": "fa02e24c-166c-4b58-afee-af76f2a8228f", 00:31:05.291 "aliases": [ 00:31:05.291 "lvs/lvol" 00:31:05.291 ], 00:31:05.291 "product_name": "Logical Volume", 00:31:05.291 "block_size": 4096, 00:31:05.291 "num_blocks": 38912, 00:31:05.291 "uuid": "fa02e24c-166c-4b58-afee-af76f2a8228f", 00:31:05.291 "assigned_rate_limits": { 00:31:05.291 "rw_ios_per_sec": 0, 00:31:05.291 "rw_mbytes_per_sec": 0, 00:31:05.291 
"r_mbytes_per_sec": 0, 00:31:05.291 "w_mbytes_per_sec": 0 00:31:05.291 }, 00:31:05.291 "claimed": false, 00:31:05.291 "zoned": false, 00:31:05.291 "supported_io_types": { 00:31:05.291 "read": true, 00:31:05.291 "write": true, 00:31:05.291 "unmap": true, 00:31:05.291 "flush": false, 00:31:05.291 "reset": true, 00:31:05.291 "nvme_admin": false, 00:31:05.291 "nvme_io": false, 00:31:05.291 "nvme_io_md": false, 00:31:05.291 "write_zeroes": true, 00:31:05.291 "zcopy": false, 00:31:05.291 "get_zone_info": false, 00:31:05.291 "zone_management": false, 00:31:05.291 "zone_append": false, 00:31:05.291 "compare": false, 00:31:05.291 "compare_and_write": false, 00:31:05.291 "abort": false, 00:31:05.291 "seek_hole": true, 00:31:05.291 "seek_data": true, 00:31:05.291 "copy": false, 00:31:05.291 "nvme_iov_md": false 00:31:05.291 }, 00:31:05.291 "driver_specific": { 00:31:05.291 "lvol": { 00:31:05.291 "lvol_store_uuid": "0551c7fa-347b-4bae-a07e-fecb54325c17", 00:31:05.291 "base_bdev": "aio_bdev", 00:31:05.291 "thin_provision": false, 00:31:05.291 "num_allocated_clusters": 38, 00:31:05.291 "snapshot": false, 00:31:05.291 "clone": false, 00:31:05.291 "esnap_clone": false 00:31:05.291 } 00:31:05.291 } 00:31:05.291 } 00:31:05.291 ] 00:31:05.291 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:05.291 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:05.291 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:05.550 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:05.550 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:05.550 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:05.809 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:05.809 17:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:06.069 [2024-12-09 17:41:34.995520] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:06.069 17:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:06.069 request: 00:31:06.069 { 00:31:06.069 "uuid": "0551c7fa-347b-4bae-a07e-fecb54325c17", 00:31:06.069 "method": "bdev_lvol_get_lvstores", 00:31:06.069 "req_id": 1 00:31:06.069 } 00:31:06.069 Got JSON-RPC error response 00:31:06.069 response: 00:31:06.069 { 00:31:06.069 "code": -19, 00:31:06.069 "message": "No such device" 00:31:06.069 } 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:06.069 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:06.328 aio_bdev 00:31:06.328 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fa02e24c-166c-4b58-afee-af76f2a8228f 00:31:06.328 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fa02e24c-166c-4b58-afee-af76f2a8228f 00:31:06.328 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:06.328 17:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:06.328 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:06.328 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:06.328 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:06.587 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa02e24c-166c-4b58-afee-af76f2a8228f -t 2000 00:31:06.846 [ 00:31:06.846 { 00:31:06.846 "name": "fa02e24c-166c-4b58-afee-af76f2a8228f", 00:31:06.846 "aliases": [ 00:31:06.846 "lvs/lvol" 00:31:06.846 ], 00:31:06.846 "product_name": "Logical Volume", 00:31:06.846 "block_size": 4096, 00:31:06.846 "num_blocks": 38912, 00:31:06.846 "uuid": "fa02e24c-166c-4b58-afee-af76f2a8228f", 00:31:06.846 "assigned_rate_limits": { 00:31:06.846 "rw_ios_per_sec": 0, 00:31:06.846 "rw_mbytes_per_sec": 0, 00:31:06.846 "r_mbytes_per_sec": 0, 00:31:06.846 "w_mbytes_per_sec": 0 00:31:06.846 }, 00:31:06.846 "claimed": false, 00:31:06.846 "zoned": false, 00:31:06.846 "supported_io_types": { 00:31:06.846 "read": true, 00:31:06.846 "write": true, 00:31:06.846 "unmap": true, 00:31:06.846 "flush": false, 00:31:06.846 "reset": true, 00:31:06.846 "nvme_admin": false, 00:31:06.846 "nvme_io": false, 00:31:06.846 "nvme_io_md": false, 00:31:06.846 "write_zeroes": true, 00:31:06.846 "zcopy": false, 00:31:06.846 "get_zone_info": false, 00:31:06.846 "zone_management": false, 00:31:06.846 "zone_append": false, 00:31:06.846 "compare": false, 00:31:06.846 "compare_and_write": false, 00:31:06.846 "abort": false, 00:31:06.846 "seek_hole": true, 00:31:06.846 "seek_data": true, 00:31:06.846 "copy": false, 00:31:06.846 "nvme_iov_md": false 00:31:06.846 }, 00:31:06.846 "driver_specific": { 00:31:06.846 "lvol": { 00:31:06.846 "lvol_store_uuid": "0551c7fa-347b-4bae-a07e-fecb54325c17", 00:31:06.846 "base_bdev": "aio_bdev", 00:31:06.846 "thin_provision": false, 00:31:06.846 "num_allocated_clusters": 38, 00:31:06.846 "snapshot": false, 00:31:06.846 "clone": false, 00:31:06.846 "esnap_clone": false 00:31:06.846 } 00:31:06.846 } 00:31:06.846 } 00:31:06.846 ] 00:31:06.846 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:06.846 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:06.846 17:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:06.846 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:06.846 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:06.846 17:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:07.105 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:07.105 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fa02e24c-166c-4b58-afee-af76f2a8228f 00:31:07.365 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0551c7fa-347b-4bae-a07e-fecb54325c17 00:31:07.624 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:07.883 00:31:07.883 real 0m17.160s 00:31:07.883 user 0m34.611s 00:31:07.883 sys 0m3.734s 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:07.883 ************************************ 00:31:07.883 END TEST lvs_grow_dirty 00:31:07.883 ************************************ 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:07.883 nvmf_trace.0 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
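Cleanup above runs strictly in reverse creation order: the lvol volume goes first, then its lvstore, then the AIO bdev, and finally the backing file; process_shm then archives the trace buffer for offline analysis. A condensed sketch of the same sequence, assuming the names used throughout this test:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_lvol_delete fa02e24c-166c-4b58-afee-af76f2a8228f
$RPC bdev_lvol_delete_lvstore -u 0551c7fa-347b-4bae-a07e-fecb54325c17
$RPC bdev_aio_delete aio_bdev
rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
# archive the shared-memory trace file produced by nvmf_tgt -i 0
tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0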
00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.883 rmmod nvme_tcp 00:31:07.883 rmmod nvme_fabrics 00:31:07.883 rmmod nvme_keyring 00:31:07.883 17:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2777328 ']' 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2777328 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2777328 ']' 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2777328 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.883 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777328 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777328' 00:31:08.143 killing process with pid 2777328 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2777328 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2777328 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.143 17:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.680 00:31:10.680 real 0m41.908s 00:31:10.680 user 0m52.234s 00:31:10.680 sys 0m10.077s 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:10.680 ************************************ 00:31:10.680 END TEST nvmf_lvs_grow 00:31:10.680 ************************************ 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:10.680 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.680 ************************************ 00:31:10.681 START TEST nvmf_bdev_io_wait 00:31:10.681 ************************************ 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:10.681 * Looking for test storage... 
00:31:10.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:10.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.681 --rc genhtml_branch_coverage=1 00:31:10.681 --rc genhtml_function_coverage=1 00:31:10.681 --rc genhtml_legend=1 00:31:10.681 --rc geninfo_all_blocks=1 00:31:10.681 --rc geninfo_unexecuted_blocks=1 00:31:10.681 00:31:10.681 ' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:10.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.681 --rc genhtml_branch_coverage=1 00:31:10.681 --rc genhtml_function_coverage=1 00:31:10.681 --rc genhtml_legend=1 00:31:10.681 --rc geninfo_all_blocks=1 00:31:10.681 --rc geninfo_unexecuted_blocks=1 00:31:10.681 00:31:10.681 ' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:10.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.681 --rc genhtml_branch_coverage=1 00:31:10.681 --rc genhtml_function_coverage=1 00:31:10.681 --rc genhtml_legend=1 00:31:10.681 --rc geninfo_all_blocks=1 00:31:10.681 --rc geninfo_unexecuted_blocks=1 00:31:10.681 00:31:10.681 ' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:10.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.681 --rc genhtml_branch_coverage=1 00:31:10.681 --rc genhtml_function_coverage=1 00:31:10.681 --rc genhtml_legend=1 00:31:10.681 --rc geninfo_all_blocks=1 00:31:10.681 --rc 
geninfo_unexecuted_blocks=1 00:31:10.681 00:31:10.681 ' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.681 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.682 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.251 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
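The array setup above seeds per-family PCI device-ID buckets; the probe that follows matches each discovered function against them, and 0x8086:0x159b lands in the e810 bucket, which is what SPDK_TEST_NVMF_NICS=e810 expects for this job. A rough illustration of the classification, not the literal common.sh logic:

# map a vendor:device pair to the NIC family the tests key off
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}
classify_nic 0x8086 0x159b   # -> e810, matching the "Found 0000:af:00.x" lines below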
00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:17.252 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:17.252 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:17.252 Found net devices under 0000:af:00.0: cvl_0_0 00:31:17.252 
17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:17.252 Found net devices under 0000:af:00.1: cvl_0_1 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.252 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:31:17.253 00:31:17.253 --- 10.0.0.2 ping statistics --- 00:31:17.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.253 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:31:17.253 00:31:17.253 --- 10.0.0.1 ping statistics --- 00:31:17.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.253 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2781335 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2781335 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2781335 ']' 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
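[Annotation] The nvmf_tcp_init steps traced above reduce to a short iproute2/iptables sequence: isolate one port in a network namespace, address both sides, open the NVMe/TCP port, and verify reachability in both directions. Sketch of the same setup, with the interface names and 10.0.0.0/24 addresses taken from this run (the script additionally tags its iptables rule with an -m comment so teardown can find it):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host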
00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 [2024-12-09 17:41:45.519979] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.253 [2024-12-09 17:41:45.520924] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:17.253 [2024-12-09 17:41:45.520961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.253 [2024-12-09 17:41:45.600643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.253 [2024-12-09 17:41:45.641974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.253 [2024-12-09 17:41:45.642010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.253 [2024-12-09 17:41:45.642016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.253 [2024-12-09 17:41:45.642022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.253 [2024-12-09 17:41:45.642027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.253 [2024-12-09 17:41:45.643506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.253 [2024-12-09 17:41:45.643620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.253 [2024-12-09 17:41:45.643725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.253 [2024-12-09 17:41:45.643726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.253 [2024-12-09 17:41:45.644073] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
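[Annotation] The target itself is then started inside the namespace with the command shown in the trace (-m 0xF gives the four reactors reported above; --wait-for-rpc holds initialization until framework_start_init). The poll loop below is only an approximation of the harness's waitforlisten helper, whose body is not shown:

  NS=(ip netns exec cvl_0_0_ns_spdk)
  "${NS[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                         # wait for /var/tmp/spdk.sock to answer
  done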
00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 [2024-12-09 17:41:45.768209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:17.253 [2024-12-09 17:41:45.768433] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.253 [2024-12-09 17:41:45.768957] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:17.253 [2024-12-09 17:41:45.768991] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 [2024-12-09 17:41:45.780482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 Malloc0 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.253 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.254 [2024-12-09 17:41:45.852611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2781390 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2781393 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:17.254 { 00:31:17.254 "params": { 00:31:17.254 "name": "Nvme$subsystem", 00:31:17.254 "trtype": "$TEST_TRANSPORT", 00:31:17.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.254 "adrfam": "ipv4", 00:31:17.254 "trsvcid": "$NVMF_PORT", 00:31:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.254 "hdgst": ${hdgst:-false}, 00:31:17.254 "ddgst": ${ddgst:-false} 00:31:17.254 }, 00:31:17.254 "method": "bdev_nvme_attach_controller" 00:31:17.254 } 00:31:17.254 EOF 00:31:17.254 )") 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2781395 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:17.254 { 00:31:17.254 "params": { 00:31:17.254 "name": "Nvme$subsystem", 00:31:17.254 "trtype": "$TEST_TRANSPORT", 00:31:17.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.254 "adrfam": "ipv4", 00:31:17.254 "trsvcid": "$NVMF_PORT", 00:31:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.254 "hdgst": ${hdgst:-false}, 00:31:17.254 "ddgst": ${ddgst:-false} 00:31:17.254 }, 00:31:17.254 "method": "bdev_nvme_attach_controller" 00:31:17.254 } 00:31:17.254 EOF 00:31:17.254 )") 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2781399 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:17.254 { 00:31:17.254 "params": { 00:31:17.254 "name": "Nvme$subsystem", 00:31:17.254 "trtype": "$TEST_TRANSPORT", 00:31:17.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.254 "adrfam": "ipv4", 00:31:17.254 "trsvcid": "$NVMF_PORT", 00:31:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.254 "hdgst": ${hdgst:-false}, 00:31:17.254 "ddgst": ${ddgst:-false} 00:31:17.254 }, 00:31:17.254 "method": "bdev_nvme_attach_controller" 00:31:17.254 } 00:31:17.254 EOF 00:31:17.254 )") 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:17.254 { 00:31:17.254 "params": { 00:31:17.254 "name": "Nvme$subsystem", 00:31:17.254 "trtype": "$TEST_TRANSPORT", 00:31:17.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.254 "adrfam": "ipv4", 00:31:17.254 "trsvcid": "$NVMF_PORT", 00:31:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.254 "hdgst": ${hdgst:-false}, 00:31:17.254 "ddgst": ${ddgst:-false} 00:31:17.254 }, 00:31:17.254 "method": "bdev_nvme_attach_controller" 00:31:17.254 } 00:31:17.254 EOF 00:31:17.254 )") 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2781390 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
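[Annotation] Steps 18-25 of bdev_io_wait.sh, traced above, configure the target over RPC. The same sequence as plain rpc.py calls, with the NQN, serial, and listen address from this run:

  rpc=./scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1                       # tiny bdev_io pool, to force ENOMEM/IO_WAIT
  $rpc framework_start_init                             # leave the --wait-for-rpc state
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420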
00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:17.254 "params": { 00:31:17.254 "name": "Nvme1", 00:31:17.254 "trtype": "tcp", 00:31:17.254 "traddr": "10.0.0.2", 00:31:17.254 "adrfam": "ipv4", 00:31:17.254 "trsvcid": "4420", 00:31:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:17.254 "hdgst": false, 00:31:17.254 "ddgst": false 00:31:17.254 }, 00:31:17.254 "method": "bdev_nvme_attach_controller" 00:31:17.254 }' 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:17.254 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:17.254 "params": { 00:31:17.255 "name": "Nvme1", 00:31:17.255 "trtype": "tcp", 00:31:17.255 "traddr": "10.0.0.2", 00:31:17.255 "adrfam": "ipv4", 00:31:17.255 "trsvcid": "4420", 00:31:17.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:17.255 "hdgst": false, 00:31:17.255 "ddgst": false 00:31:17.255 }, 00:31:17.255 "method": "bdev_nvme_attach_controller" 00:31:17.255 }' 00:31:17.255 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:17.255 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:17.255 "params": { 00:31:17.255 "name": "Nvme1", 00:31:17.255 "trtype": "tcp", 00:31:17.255 "traddr": "10.0.0.2", 00:31:17.255 "adrfam": "ipv4", 00:31:17.255 "trsvcid": "4420", 00:31:17.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:17.255 "hdgst": false, 00:31:17.255 "ddgst": false 00:31:17.255 }, 00:31:17.255 "method": "bdev_nvme_attach_controller" 00:31:17.255 }' 00:31:17.255 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:17.255 17:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:17.255 "params": { 00:31:17.255 "name": "Nvme1", 00:31:17.255 "trtype": "tcp", 00:31:17.255 "traddr": "10.0.0.2", 00:31:17.255 "adrfam": "ipv4", 00:31:17.255 "trsvcid": "4420", 00:31:17.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:17.255 "hdgst": false, 00:31:17.255 "ddgst": false 00:31:17.255 }, 00:31:17.255 "method": "bdev_nvme_attach_controller" 00:31:17.255 }' 00:31:17.255 [2024-12-09 17:41:45.904234] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:17.255 [2024-12-09 17:41:45.904279] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:17.255 [2024-12-09 17:41:45.904866] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:31:17.255 [2024-12-09 17:41:45.904905] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:17.255 [2024-12-09 17:41:45.905128] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:17.255 [2024-12-09 17:41:45.905179] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:17.255 [2024-12-09 17:41:45.907815] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:17.255 [2024-12-09 17:41:45.907863] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:17.255 [2024-12-09 17:41:46.066758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.255 [2024-12-09 17:41:46.110176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:17.255 [2024-12-09 17:41:46.120040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.255 [2024-12-09 17:41:46.158147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:17.255 [2024-12-09 17:41:46.210868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.255 [2024-12-09 17:41:46.255136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:17.255 [2024-12-09 17:41:46.311593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.255 [2024-12-09 17:41:46.365174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:17.255 Running I/O for 1 seconds... 00:31:17.513 Running I/O for 1 seconds... 00:31:17.513 Running I/O for 1 seconds... 00:31:17.513 Running I/O for 1 seconds... 
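[Annotation] Each of the four bdevperf instances above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) attaches to the target through a JSON config generated by gen_nvmf_target_json and passed on /dev/fd/63, which is exactly what a bash process substitution produces. The params fragment is printed verbatim in the trace; the outer subsystems/bdev wrapper below is the standard SPDK JSON config layout and is an assumption, since only the fragment appears here. The write instance would then amount to:

  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(cat <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        }
      }]
    }]
  }
  EOF
  )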
00:31:18.446 16386.00 IOPS, 64.01 MiB/s 00:31:18.446 Latency(us) 00:31:18.446 [2024-12-09T16:41:47.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.446 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:18.446 Nvme1n1 : 1.01 16452.86 64.27 0.00 0.00 7760.89 3229.99 9112.62 00:31:18.446 [2024-12-09T16:41:47.625Z] =================================================================================================================== 00:31:18.446 [2024-12-09T16:41:47.625Z] Total : 16452.86 64.27 0.00 0.00 7760.89 3229.99 9112.62 00:31:18.446 6867.00 IOPS, 26.82 MiB/s 00:31:18.446 Latency(us) 00:31:18.446 [2024-12-09T16:41:47.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.446 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:18.446 Nvme1n1 : 1.01 6907.87 26.98 0.00 0.00 18399.59 1513.57 26464.06 00:31:18.446 [2024-12-09T16:41:47.625Z] =================================================================================================================== 00:31:18.446 [2024-12-09T16:41:47.625Z] Total : 6907.87 26.98 0.00 0.00 18399.59 1513.57 26464.06 00:31:18.446 243768.00 IOPS, 952.22 MiB/s 00:31:18.446 Latency(us) 00:31:18.446 [2024-12-09T16:41:47.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.446 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:18.446 Nvme1n1 : 1.00 243405.05 950.80 0.00 0.00 523.52 220.40 1490.16 00:31:18.446 [2024-12-09T16:41:47.625Z] =================================================================================================================== 00:31:18.446 [2024-12-09T16:41:47.625Z] Total : 243405.05 950.80 0.00 0.00 523.52 220.40 1490.16 00:31:18.446 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2781393 00:31:18.446 7469.00 IOPS, 29.18 MiB/s 00:31:18.446 Latency(us) 00:31:18.446 [2024-12-09T16:41:47.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.446 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:18.446 Nvme1n1 : 1.01 7562.31 29.54 0.00 0.00 16882.59 3776.12 36700.16 00:31:18.446 [2024-12-09T16:41:47.625Z] =================================================================================================================== 00:31:18.446 [2024-12-09T16:41:47.625Z] Total : 7562.31 29.54 0.00 0.00 16882.59 3776.12 36700.16 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2781395 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2781399 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.705 rmmod nvme_tcp 00:31:18.705 rmmod nvme_fabrics 00:31:18.705 rmmod nvme_keyring 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2781335 ']' 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2781335 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2781335 ']' 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2781335 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2781335 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2781335' 00:31:18.705 killing process with pid 2781335 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2781335 00:31:18.705 17:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2781335 00:31:18.964 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.964 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.964 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.964 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:18.964 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:31:18.964 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.965 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.965 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.965 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.965 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.965 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.965 17:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:21.501 00:31:21.501 real 0m10.705s 00:31:21.501 user 0m15.057s 00:31:21.501 sys 0m6.387s 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.501 ************************************ 00:31:21.501 END TEST nvmf_bdev_io_wait 00:31:21.501 ************************************ 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:21.501 ************************************ 00:31:21.501 START TEST nvmf_queue_depth 00:31:21.501 ************************************ 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:21.501 * Looking for test storage... 
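[Annotation] The nvmftestfini teardown traced at the end of the bdev_io_wait run above reverses the setup: unload the host-side modules, kill the target, strip only the iptables rules the test tagged, and tear down the namespace. A sketch; remove_spdk_ns's body is not shown in the trace, so the netns delete line is an assumption:

  modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring
  kill "$nvmfpid"                                       # killprocess $nvmfpid
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove only SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk                       # assumed body of remove_spdk_ns
  ip -4 addr flush cvl_0_1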
00:31:21.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.501 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:21.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.502 --rc genhtml_branch_coverage=1 00:31:21.502 --rc genhtml_function_coverage=1 00:31:21.502 --rc genhtml_legend=1 00:31:21.502 --rc geninfo_all_blocks=1 00:31:21.502 --rc geninfo_unexecuted_blocks=1 00:31:21.502 00:31:21.502 ' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:21.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.502 --rc genhtml_branch_coverage=1 00:31:21.502 --rc genhtml_function_coverage=1 00:31:21.502 --rc genhtml_legend=1 00:31:21.502 --rc geninfo_all_blocks=1 00:31:21.502 --rc geninfo_unexecuted_blocks=1 00:31:21.502 00:31:21.502 ' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:21.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.502 --rc genhtml_branch_coverage=1 00:31:21.502 --rc genhtml_function_coverage=1 00:31:21.502 --rc genhtml_legend=1 00:31:21.502 --rc geninfo_all_blocks=1 00:31:21.502 --rc geninfo_unexecuted_blocks=1 00:31:21.502 00:31:21.502 ' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:21.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.502 --rc genhtml_branch_coverage=1 00:31:21.502 --rc genhtml_function_coverage=1 00:31:21.502 --rc genhtml_legend=1 00:31:21.502 --rc geninfo_all_blocks=1 00:31:21.502 --rc 
geninfo_unexecuted_blocks=1 00:31:21.502 00:31:21.502 ' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.502 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.503 17:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.075 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.075 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.075 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.075 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.075 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.076 17:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:28.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:28.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:31:28.076 Found net devices under 0000:af:00.0: cvl_0_0 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:28.076 Found net devices under 0000:af:00.1: cvl_0_1 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.076 17:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.076 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.076 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.076 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.076 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.076 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.076 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:31:28.077 00:31:28.077 --- 10.0.0.2 ping statistics --- 00:31:28.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.077 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:31:28.077 00:31:28.077 --- 10.0.0.1 ping statistics --- 00:31:28.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.077 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2785313 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2785313 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2785313 ']' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
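waitforlisten above blocks until the freshly launched nvmf_tgt (pid 2785313) answers on /var/tmp/spdk.sock. A minimal sketch of that polling loop, assuming the workspace rpc.py path; the retry count and sleep interval are illustrative, not the test's exact values:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt is up"
            break
        fi
        sleep 0.5
    done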
00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 [2024-12-09 17:41:56.397635] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:28.077 [2024-12-09 17:41:56.398555] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:28.077 [2024-12-09 17:41:56.398590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.077 [2024-12-09 17:41:56.476106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.077 [2024-12-09 17:41:56.512884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.077 [2024-12-09 17:41:56.512920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.077 [2024-12-09 17:41:56.512928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.077 [2024-12-09 17:41:56.512934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.077 [2024-12-09 17:41:56.512938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.077 [2024-12-09 17:41:56.513466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.077 [2024-12-09 17:41:56.580139] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.077 [2024-12-09 17:41:56.580338] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
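The notices above follow from launching the target with -m 0x2 --interrupt-mode: core mask 0x2 selects core 1 only (hence "Reactor started on core 1"), and interrupt mode parks the reactor in an event wait instead of busy-polling, which is what the spdk_thread_set_interrupt_mode messages confirm. One illustrative way to check the reactor is sleeping rather than spinning (the pgrep pattern is an assumption matching the command line logged above):

    pid=$(pgrep -n -f 'nvmf_tgt -i 0')    # assumed match on the logged command line
    ps -o pid,psr,%cpu,comm -p "$pid"     # expect psr=1 and %cpu near 0 in interrupt mode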
00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 [2024-12-09 17:41:56.654158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 Malloc0 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
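The rpc_cmd sequence above (transport, malloc bdev, subsystem, namespace, listener) maps one-to-one onto plain rpc.py calls against the default /var/tmp/spdk.sock socket; arguments below are taken verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420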
00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 [2024-12-09 17:41:56.730311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2785339 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2785339 /var/tmp/bdevperf.sock 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2785339 ']' 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:28.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.077 [2024-12-09 17:41:56.783479] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
00:31:28.077 [2024-12-09 17:41:56.783524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785339 ] 00:31:28.077 [2024-12-09 17:41:56.857770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.077 [2024-12-09 17:41:56.896691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.077 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:28.078 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:28.078 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.078 17:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:28.078 NVMe0n1 00:31:28.078 17:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.078 17:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:28.336 Running I/O for 10 seconds... 00:31:30.210 11702.00 IOPS, 45.71 MiB/s [2024-12-09T16:42:00.765Z] 12143.50 IOPS, 47.44 MiB/s [2024-12-09T16:42:01.332Z] 12280.00 IOPS, 47.97 MiB/s [2024-12-09T16:42:02.708Z] 12340.50 IOPS, 48.21 MiB/s [2024-12-09T16:42:03.643Z] 12433.20 IOPS, 48.57 MiB/s [2024-12-09T16:42:04.580Z] 12456.33 IOPS, 48.66 MiB/s [2024-12-09T16:42:05.518Z] 12502.86 IOPS, 48.84 MiB/s [2024-12-09T16:42:06.558Z] 12542.88 IOPS, 49.00 MiB/s [2024-12-09T16:42:07.495Z] 12566.78 IOPS, 49.09 MiB/s [2024-12-09T16:42:07.495Z] 12590.90 IOPS, 49.18 MiB/s 00:31:38.316 Latency(us) 00:31:38.316 [2024-12-09T16:42:07.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.316 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:38.316 Verification LBA range: start 0x0 length 0x4000 00:31:38.316 NVMe0n1 : 10.06 12612.25 49.27 0.00 0.00 80940.25 18724.57 53427.44 00:31:38.316 [2024-12-09T16:42:07.495Z] =================================================================================================================== 00:31:38.316 [2024-12-09T16:42:07.495Z] Total : 12612.25 49.27 0.00 0.00 80940.25 18724.57 53427.44 00:31:38.316 { 00:31:38.316 "results": [ 00:31:38.316 { 00:31:38.316 "job": "NVMe0n1", 00:31:38.316 "core_mask": "0x1", 00:31:38.316 "workload": "verify", 00:31:38.316 "status": "finished", 00:31:38.316 "verify_range": { 00:31:38.316 "start": 0, 00:31:38.316 "length": 16384 00:31:38.316 }, 00:31:38.316 "queue_depth": 1024, 00:31:38.316 "io_size": 4096, 00:31:38.316 "runtime": 10.058477, 00:31:38.316 "iops": 12612.247361106458, 00:31:38.316 "mibps": 49.2665912543221, 00:31:38.316 "io_failed": 0, 00:31:38.316 "io_timeout": 0, 00:31:38.316 "avg_latency_us": 80940.2534542315, 00:31:38.316 "min_latency_us": 18724.571428571428, 00:31:38.316 "max_latency_us": 53427.44380952381 00:31:38.316 } 
00:31:38.316 ], 00:31:38.316 "core_count": 1 00:31:38.316 } 00:31:38.316 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2785339 00:31:38.316 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2785339 ']' 00:31:38.316 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2785339 00:31:38.316 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:38.316 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.317 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2785339 00:31:38.317 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.317 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.317 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2785339' 00:31:38.317 killing process with pid 2785339 00:31:38.317 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2785339 00:31:38.317 Received shutdown signal, test time was about 10.000000 seconds 00:31:38.317 00:31:38.317 Latency(us) 00:31:38.317 [2024-12-09T16:42:07.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.317 [2024-12-09T16:42:07.496Z] =================================================================================================================== 00:31:38.317 [2024-12-09T16:42:07.496Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:38.317 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2785339 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.575 rmmod nvme_tcp 00:31:38.575 rmmod nvme_fabrics 00:31:38.575 rmmod nvme_keyring 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
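As a sanity check on the result block above: 12612.25 IOPS of 4096-byte I/O at queue depth 1024 works out to exactly the 49.27 MiB/s bdevperf reports:

    awk 'BEGIN { printf "%.2f MiB/s\n", 12612.25 * 4096 / 1048576 }'    # prints 49.27 MiB/s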
00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2785313 ']' 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2785313 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2785313 ']' 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2785313 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.575 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2785313 00:31:38.834 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:38.834 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:38.834 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2785313' 00:31:38.834 killing process with pid 2785313 00:31:38.834 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2785313 00:31:38.834 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2785313 00:31:38.834 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.835 17:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.371 17:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.371 00:31:41.371 real 0m19.868s 00:31:41.371 user 0m22.938s 00:31:41.371 sys 0m6.254s 00:31:41.371 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:41.371 ************************************ 00:31:41.371 END TEST nvmf_queue_depth 00:31:41.371 ************************************ 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:41.371 ************************************ 00:31:41.371 START TEST nvmf_target_multipath 00:31:41.371 ************************************ 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:41.371 * Looking for test storage... 00:31:41.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.371 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:41.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.371 --rc genhtml_branch_coverage=1 00:31:41.371 --rc genhtml_function_coverage=1 00:31:41.371 --rc genhtml_legend=1 00:31:41.371 --rc geninfo_all_blocks=1 00:31:41.371 --rc geninfo_unexecuted_blocks=1 00:31:41.371 00:31:41.371 ' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:41.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.372 --rc genhtml_branch_coverage=1 00:31:41.372 --rc genhtml_function_coverage=1 00:31:41.372 --rc genhtml_legend=1 00:31:41.372 --rc geninfo_all_blocks=1 00:31:41.372 --rc geninfo_unexecuted_blocks=1 00:31:41.372 00:31:41.372 ' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:41.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.372 --rc genhtml_branch_coverage=1 00:31:41.372 --rc genhtml_function_coverage=1 00:31:41.372 --rc genhtml_legend=1 
00:31:41.372 --rc geninfo_all_blocks=1 00:31:41.372 --rc geninfo_unexecuted_blocks=1 00:31:41.372 00:31:41.372 ' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:41.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.372 --rc genhtml_branch_coverage=1 00:31:41.372 --rc genhtml_function_coverage=1 00:31:41.372 --rc genhtml_legend=1 00:31:41.372 --rc geninfo_all_blocks=1 00:31:41.372 --rc geninfo_unexecuted_blocks=1 00:31:41.372 00:31:41.372 ' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.372 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.945 17:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:47.945 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:47.945 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:47.945 17:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:47.945 Found net devices under 0000:af:00.0: cvl_0_0 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:47.945 Found net devices under 0000:af:00.1: cvl_0_1 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:47.945 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.946 17:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:47.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:47.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms
00:31:47.946
00:31:47.946 --- 10.0.0.2 ping statistics ---
00:31:47.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:47.946 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:47.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:47.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms
00:31:47.946
00:31:47.946 --- 10.0.0.1 ping statistics ---
00:31:47.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:47.946 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:31:47.946 17:42:16
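Worth spelling out the topology those ip commands built and the pings just verified: one port of the dual-port NIC is moved into a private network namespace to act as the NVMe-oF target, the other stays in the root namespace as the initiator, giving a real back-to-back TCP path on a single host. A minimal hand-run equivalent, using this log's interface and namespace names:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.2                         # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1     # and back the other way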
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.946 17:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:49.324 17:42:18 
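Note how iptr in the teardown above removes firewall rules without tracking rule positions: every rule the harness inserts (via ipts during setup) carries an -m comment tag, so cleanup can rewrite the ruleset minus anything tagged. The pattern in isolation, with the rule arguments taken from this very run:

TAG=SPDK_NVMF
# insert a rule that records its own arguments inside the comment tag
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "$TAG:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
# later: dump the ruleset, drop every tagged rule, and reload what remains
iptables-save | grep -v "$TAG" | iptables-restore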
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.324 00:31:49.324 real 0m8.325s 00:31:49.324 user 0m1.819s 00:31:49.324 sys 0m4.470s 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:49.324 ************************************ 00:31:49.324 END TEST nvmf_target_multipath 00:31:49.324 ************************************ 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.324 ************************************ 00:31:49.324 START TEST nvmf_zcopy 00:31:49.324 ************************************ 00:31:49.324 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:49.583 * Looking for test storage... 
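The banner pairs and the real/user/sys block just above come from run_test in autotest_common.sh, which wraps each sub-test script in START/END markers and a time measurement. Boiled down to its observable behavior here (a sketch under that assumption, not the actual implementation, and run_test_sketch is an illustrative name):

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # produces the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test_sketch nvmf_zcopy ./zcopy.sh --transport=tcp --interrupt-mode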
00:31:49.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.583 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.583 --rc genhtml_branch_coverage=1 00:31:49.583 --rc genhtml_function_coverage=1 00:31:49.583 --rc genhtml_legend=1 00:31:49.583 --rc geninfo_all_blocks=1 00:31:49.583 --rc geninfo_unexecuted_blocks=1 00:31:49.583 00:31:49.583 ' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.584 --rc genhtml_branch_coverage=1 00:31:49.584 --rc genhtml_function_coverage=1 00:31:49.584 --rc genhtml_legend=1 00:31:49.584 --rc geninfo_all_blocks=1 00:31:49.584 --rc geninfo_unexecuted_blocks=1 00:31:49.584 00:31:49.584 ' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.584 --rc genhtml_branch_coverage=1 00:31:49.584 --rc genhtml_function_coverage=1 00:31:49.584 --rc genhtml_legend=1 00:31:49.584 --rc geninfo_all_blocks=1 00:31:49.584 --rc geninfo_unexecuted_blocks=1 00:31:49.584 00:31:49.584 ' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.584 --rc genhtml_branch_coverage=1 00:31:49.584 --rc genhtml_function_coverage=1 00:31:49.584 --rc genhtml_legend=1 00:31:49.584 --rc geninfo_all_blocks=1 00:31:49.584 --rc geninfo_unexecuted_blocks=1 00:31:49.584 00:31:49.584 ' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
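The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x: both version strings are split on '.' and '-' and compared field by field, numerically, which is what selects the extra coverage flags exported just after. The same idea as a self-contained function (lt_version is an illustrative name, not the harness's):

# return 0 if $1 is an older dotted version than $2 (numeric fields only)
lt_version() {
    local IFS=.- v1 v2 i
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}
lt_version 1.15 2 && echo "old lcov: enable the branch-coverage workarounds"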
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.584 17:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.584 17:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.154 17:42:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.154 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.154 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.154 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.154 Found net devices under 0000:af:00.1: cvl_0_1 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.154 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.155 17:42:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:56.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:56.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms
00:31:56.155
00:31:56.155 --- 10.0.0.2 ping statistics ---
00:31:56.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:56.155 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:56.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:56.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:31:56.155
00:31:56.155 --- 10.0.0.1 ping statistics ---
00:31:56.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:56.155 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2794416
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e
0xFFFF --interrupt-mode -m 0x2 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2794416 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2794416 ']' 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 [2024-12-09 17:42:24.634352] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.155 [2024-12-09 17:42:24.635197] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:31:56.155 [2024-12-09 17:42:24.635233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.155 [2024-12-09 17:42:24.710409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.155 [2024-12-09 17:42:24.747276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.155 [2024-12-09 17:42:24.747307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.155 [2024-12-09 17:42:24.747314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.155 [2024-12-09 17:42:24.747320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.155 [2024-12-09 17:42:24.747324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.155 [2024-12-09 17:42:24.747854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.155 [2024-12-09 17:42:24.813650] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.155 [2024-12-09 17:42:24.813842] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
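At this point nvmfappstart has launched the target inside the test namespace with --interrupt-mode, so the reactors sleep on file descriptors instead of busy-polling (hence the "Set spdk_thread ... to intr mode" notices), and waitforlisten blocks until the RPC socket answers. A rough standalone equivalent of that start-and-wait step; the polling loop is a simplification of autotest_common.sh's waitforlisten, not a copy of it:

NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"${NS_CMD[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app is ready to serve requests
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done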
00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 [2024-12-09 17:42:24.892592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 [2024-12-09 17:42:24.920800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:56.155 17:42:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 malloc0 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.155 { 00:31:56.155 "params": { 00:31:56.155 "name": "Nvme$subsystem", 00:31:56.155 "trtype": "$TEST_TRANSPORT", 00:31:56.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.155 "adrfam": "ipv4", 00:31:56.155 "trsvcid": "$NVMF_PORT", 00:31:56.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.155 "hdgst": ${hdgst:-false}, 00:31:56.155 "ddgst": ${ddgst:-false} 00:31:56.155 }, 00:31:56.155 "method": "bdev_nvme_attach_controller" 00:31:56.155 } 00:31:56.155 EOF 00:31:56.155 )") 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:56.155 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:56.156 17:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.156 "params": { 00:31:56.156 "name": "Nvme1", 00:31:56.156 "trtype": "tcp", 00:31:56.156 "traddr": "10.0.0.2", 00:31:56.156 "adrfam": "ipv4", 00:31:56.156 "trsvcid": "4420", 00:31:56.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.156 "hdgst": false, 00:31:56.156 "ddgst": false 00:31:56.156 }, 00:31:56.156 "method": "bdev_nvme_attach_controller" 00:31:56.156 }' 00:31:56.156 [2024-12-09 17:42:25.010050] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
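The heredoc traced above is gen_nvmf_target_json assembling a bdev_nvme_attach_controller stanza, which the harness feeds to bdevperf through process substitution as --json /dev/fd/NN, so no config file ever touches disk. Collapsed into a single hand-runnable command with this run's values; the outer subsystems/bdev wrapper is inferred from what the jq step prints, so treat the exact shape as an approximation:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -t 10 -q 128 -w verify -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)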
00:31:56.156 [2024-12-09 17:42:25.010091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794481 ]
00:31:56.156 [2024-12-09 17:42:25.083869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:56.156 [2024-12-09 17:42:25.123248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:56.414 Running I/O for 10 seconds...
00:31:58.285 8585.00 IOPS, 67.07 MiB/s
[2024-12-09T16:42:28.841Z] 8643.00 IOPS, 67.52 MiB/s
[2024-12-09T16:42:29.777Z] 8656.00 IOPS, 67.62 MiB/s
[2024-12-09T16:42:30.712Z] 8677.25 IOPS, 67.79 MiB/s
[2024-12-09T16:42:31.648Z] 8685.60 IOPS, 67.86 MiB/s
[2024-12-09T16:42:32.585Z] 8695.17 IOPS, 67.93 MiB/s
[2024-12-09T16:42:33.520Z] 8700.86 IOPS, 67.98 MiB/s
[2024-12-09T16:42:34.456Z] 8702.50 IOPS, 67.99 MiB/s
[2024-12-09T16:42:35.832Z] 8706.11 IOPS, 68.02 MiB/s
[2024-12-09T16:42:35.832Z] 8706.70 IOPS, 68.02 MiB/s
00:32:06.653 Latency(us)
00:32:06.653 [2024-12-09T16:42:35.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.653 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:32:06.653 Verification LBA range: start 0x0 length 0x1000
00:32:06.653 Nvme1n1 : 10.01 8708.64 68.04 0.00 0.00 14655.89 2044.10 21346.01
00:32:06.653 [2024-12-09T16:42:35.832Z] ===================================================================================================================
00:32:06.653 [2024-12-09T16:42:35.832Z] Total : 8708.64 68.04 0.00 0.00 14655.89 2044.10 21346.01
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2796239
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:06.653 {
00:32:06.653 "params": {
00:32:06.653 "name": "Nvme$subsystem",
00:32:06.653 "trtype": "$TEST_TRANSPORT",
00:32:06.653 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:06.653 "adrfam": "ipv4",
00:32:06.653 "trsvcid": "$NVMF_PORT",
00:32:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:06.653 "hdgst": ${hdgst:-false},
00:32:06.653 "ddgst": ${ddgst:-false}
00:32:06.653 },
00:32:06.653 "method": "bdev_nvme_attach_controller"
00:32:06.653 }
00:32:06.653 EOF
00:32:06.653 )")
00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:32:06.653
[2024-12-09 17:42:35.596192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.653 [2024-12-09 17:42:35.596227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:06.653 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.653 "params": { 00:32:06.654 "name": "Nvme1", 00:32:06.654 "trtype": "tcp", 00:32:06.654 "traddr": "10.0.0.2", 00:32:06.654 "adrfam": "ipv4", 00:32:06.654 "trsvcid": "4420", 00:32:06.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.654 "hdgst": false, 00:32:06.654 "ddgst": false 00:32:06.654 }, 00:32:06.654 "method": "bdev_nvme_attach_controller" 00:32:06.654 }' 00:32:06.654 [2024-12-09 17:42:35.608155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.608167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.620150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.620159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.632151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.632159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.635889] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
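[annotation] At target/zcopy.sh@37-39 above, a second bdevperf run (5 seconds of 50/50 randrw at the same 8 KiB, queue-depth-128 settings) is started and its pid recorded as perfpid=2796239, which is what lets the script keep driving RPCs at the target while this job is in flight. A sketch of the launch pattern the trace implies; the backgrounding with & and $! is an assumption, since only perfpid itself appears in the log:

# Launch the mixed-workload job in the background and remember its pid,
# so RPCs can be issued against the live target while it runs:
./build/examples/bdevperf --json <(gen_config 1) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
# ... exercise RPCs against the target here (see the add_ns loop sketched below) ...
wait "$perfpid"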
00:32:06.654 [2024-12-09 17:42:35.635927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796239 ] 00:32:06.654 [2024-12-09 17:42:35.644150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.644161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.656150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.656159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.668150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.668159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.680151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.680159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.692150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.692159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.704150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.704160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.710367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.654 [2024-12-09 17:42:35.716151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.716161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.728152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.728164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.740150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.740160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.750265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.654 [2024-12-09 17:42:35.752150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.752161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.764160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.764178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.776159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.776174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.788163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:06.654 [2024-12-09 17:42:35.788181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.800155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.800169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.812153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.812164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.654 [2024-12-09 17:42:35.824150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.654 [2024-12-09 17:42:35.824160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.836164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.836184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.848156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.848170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.860157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.860172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.872151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.872160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.884150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.884159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.896152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.896163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.908153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.908166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.920154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.920168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.932160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.932176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 Running I/O for 5 seconds... 
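[annotation] Every "Requested NSID 1 already in use" / "Unable to add namespace" pair that follows is one iteration of the test re-issuing nvmf_subsystem_add_ns for a namespace that already exists while the randrw job runs: the RPC layer pauses the subsystem, the add fails in the paused callback (hence the nvmf_rpc_ns_paused frame in the message), and the subsystem is resumed. The rpc_cmd invocation appears verbatim at target/zcopy.sh@30 earlier in this log; a hedged sketch of the loop shape, where the kill -0 liveness check is illustrative:

# Re-add a duplicate NSID in a tight loop while the bdevperf job is alive.
# Each attempt is expected to fail and log the error pair seen above.
while kill -0 "$perfpid" 2>/dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done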
00:32:06.913 [2024-12-09 17:42:35.944179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.944198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.960655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.960673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.976002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.976020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:35.989735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:35.989752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:36.004235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:36.004254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.913 [2024-12-09 17:42:36.015768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.913 [2024-12-09 17:42:36.015786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.914 [2024-12-09 17:42:36.029839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.914 [2024-12-09 17:42:36.029857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.914 [2024-12-09 17:42:36.044363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.914 [2024-12-09 17:42:36.044382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.914 [2024-12-09 17:42:36.055325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.914 [2024-12-09 17:42:36.055343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.914 [2024-12-09 17:42:36.069692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.914 [2024-12-09 17:42:36.069709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.914 [2024-12-09 17:42:36.083856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.914 [2024-12-09 17:42:36.083874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.096889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.096907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.112551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.112568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.128158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.128178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.141443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 
[2024-12-09 17:42:36.141462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.156267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.156289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.169039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.169057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.183546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.183564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.197046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.197064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.212149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.212171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.225899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.225917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.240617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.240635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.251629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.251647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.266128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.266145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.280835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.280852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.296477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.296494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.312299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.312317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.325472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.325489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.172 [2024-12-09 17:42:36.339724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.172 [2024-12-09 17:42:36.339741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.353526] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.353545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.367586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.367605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.382188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.382206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.396548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.396566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.411878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.411897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.425810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.425828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.440145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.440164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.454109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.454128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.468560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.468578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.484193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.484212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.497699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.497717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.512377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.512395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.523670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.523688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.538446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.538464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.552693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.552711] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.567593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.567611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.581495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.581514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.430 [2024-12-09 17:42:36.596006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.430 [2024-12-09 17:42:36.596025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.431 [2024-12-09 17:42:36.607492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.431 [2024-12-09 17:42:36.607511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.622298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.622318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.636670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.636688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.652185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.652204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.665310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.665329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.677599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.677617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.692026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.692044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.703227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.703246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.717424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.717442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.731902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.731921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.745115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.745134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.759717] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.759736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.772224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.772242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.786440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.786459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.801177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.801195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.816725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.816743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.828105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.828123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.842192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.842210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.689 [2024-12-09 17:42:36.856829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.689 [2024-12-09 17:42:36.856847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.872513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.872531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.884736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.884753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.897726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.897744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.911978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.911996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.924805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.924822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.940625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.940642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 16921.00 IOPS, 132.20 MiB/s [2024-12-09T16:42:37.127Z] [2024-12-09 17:42:36.956512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:07.948 [2024-12-09 17:42:36.956530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.969937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.969961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:36.984734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:36.984752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.000103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.000125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.013024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.013042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.027912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.027931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.040536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.040553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.053695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.053712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.068316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.068333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.079073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.079090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.093708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.093726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.108471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.108488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.948 [2024-12-09 17:42:37.123820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.948 [2024-12-09 17:42:37.123840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.137957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.137975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.153342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.153361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.167752] 
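[annotation] The per-second markers interleaved with the errors (16921.00 IOPS, 132.20 MiB/s here, and 8706.70 IOPS, 68.02 MiB/s in the earlier verify pass) are internally consistent with the 8 KiB I/O size on the command line: with -o 8192, MiB/s = IOPS x 8192 / 2^20 = IOPS / 128. A quick check; bc truncates where bdevperf rounds, so the last digit can differ by one:

# IOPS x 8 KiB -> MiB/s, matching the log's per-second markers:
echo 'scale=2; 16921.00 * 8192 / 1048576' | bc   # 132.19 (~132.20 in the log)
echo 'scale=2; 8708.64 * 8192 / 1048576' | bc    # 68.03  (~68.04 in the summary table)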
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.167770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.181402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.181420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.196113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.196131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.207106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.207124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.221823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.221841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.236565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.236583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.251580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.251598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.266105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.266128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.280386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.280413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.293778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.293795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.308725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.308743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.320360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.207 [2024-12-09 17:42:37.320377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.207 [2024-12-09 17:42:37.333645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.208 [2024-12-09 17:42:37.333663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.208 [2024-12-09 17:42:37.348345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.208 [2024-12-09 17:42:37.348363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.208 [2024-12-09 17:42:37.358640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.208 [2024-12-09 17:42:37.358657] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.208 [2024-12-09 17:42:37.373079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.208 [2024-12-09 17:42:37.373096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.387991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.388010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.401892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.401909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.416351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.416370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.426961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.426979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.441475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.441493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.456797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.456815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.466 [2024-12-09 17:42:37.471796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.466 [2024-12-09 17:42:37.471814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.485852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.485869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.500382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.500405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.511526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.511543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.525936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.525959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.540320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.540338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.550874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.550892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.565772] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.565790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.580230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.580248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.593770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.593789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.608419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.608438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.618616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.618636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.467 [2024-12-09 17:42:37.633049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.467 [2024-12-09 17:42:37.633066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.647913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.647932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.659542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.659561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.673885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.673903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.688358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.688377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.700778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.700796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.713778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.713795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.728000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.728018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.741797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.741815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.756458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.756475] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.772082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.772101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.786129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.786151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.800892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.800910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.816154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.816173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.830182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.830200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.844640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.844657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.860357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.860375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.873698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.873717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.888703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.888722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.726 [2024-12-09 17:42:37.901533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.726 [2024-12-09 17:42:37.901552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:37.916480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:37.916498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:37.929250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:37.929269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:37.944509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:37.944527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 16904.00 IOPS, 132.06 MiB/s [2024-12-09T16:42:38.164Z] [2024-12-09 17:42:37.960313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:37.960332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 
17:42:37.974026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:37.974044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:37.988810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:37.988828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.004338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.004357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.015277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.015295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.030019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.030037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.044629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.044647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.060108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.060126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.073493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.073511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.083907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.083925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.097766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.097785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.112287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.112306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.124736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.124754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.137859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.137877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.985 [2024-12-09 17:42:38.152845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.985 [2024-12-09 17:42:38.152863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.168094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.168113] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.180671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.180690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.195977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.195996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.210247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.210265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.224711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.224729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.239711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.239729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.253957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.253975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.268235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.268253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.280719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.280737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.296432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.296453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.308599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.308616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.321832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.321856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.336352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.336369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.347069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.347086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.361540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.244 [2024-12-09 17:42:38.361557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.244 [2024-12-09 17:42:38.375393] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:09.244 [2024-12-09 17:42:38.375411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:10.021 16930.33 IOPS, 132.27 MiB/s [2024-12-09T16:42:39.200Z]
00:32:10.798 16921.25 IOPS, 132.20 MiB/s [2024-12-09T16:42:39.977Z]
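The two *ERROR* lines above repeat in lockstep for every attempt the test makes to re-add NSID 1 while that namespace is still attached: spdk_nvmf_subsystem_add_ns_ext rejects the NSID collision and the RPC layer then reports the failed add, while the I/O workload keeps running in the background (hence the periodic IOPS readings). A minimal sketch of the failing call, assuming a running target, SPDK's scripts/rpc.py on the default RPC socket, and a bdev named malloc0 (the bdev name is taken from later in this log; any bdev would trip the same NSID check):

  # NSID 1 is already attached to cnode1, so the target refuses the add:
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # => Requested NSID 1 already in use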
00:32:11.834 16909.80 IOPS, 132.11 MiB/s
00:32:11.834 Latency(us)
00:32:11.834 [2024-12-09T16:42:41.013Z] Device Information : runtime(s)     IOPS      MiB/s    Fail/s    TO/s     Average    min       max
00:32:11.834 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:11.834 Nvme1n1            : 5.01          16912.53  132.13   0.00      0.00     7561.74    2059.70   12919.95
00:32:11.834 [2024-12-09T16:42:41.013Z] ===================================================================================================================
00:32:11.834 [2024-12-09T16:42:41.013Z] Total              :               16912.53  132.13   0.00      0.00     7561.74    2059.70   12919.95
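The MiB/s column is just the IOPS column multiplied by the 8192-byte IO size shown on the Job line; a quick sanity check of the Total row:

  # 16912.53 IOPS * 8192 bytes per IO, expressed in MiB/s:
  echo 'scale=2; 16912.53 * 8192 / (1024 * 1024)' | bc
  # prints 132.12; the table's 132.13 is the same value after rounding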
00:32:12.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2796239) - No such process
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2796239
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
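rpc_cmd above is the test harness's wrapper around SPDK's JSON-RPC client; the same namespace swap, including the nvmf_subsystem_add_ns call that follows just below, could be issued directly with scripts/rpc.py. A sketch, assuming a target on the default RPC socket and the bdev names from this log:

  # Detach the live namespace, wrap the base bdev in a delay bdev whose
  # average and p99 read/write latencies are 1000000 us (one second),
  # then re-attach the delayed bdev as NSID 1:
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1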
00:32:12.095 delay0
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:12.095 17:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-09 17:42:41.257543] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:32:20.215 Initializing NVMe Controllers
00:32:20.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:20.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:20.215 Initialization complete. Launching workers.
00:32:20.215 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4808
00:32:20.215 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5083, failed to submit 45
00:32:20.215 success 4921, unsuccessful 162, failed 0
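The abort example's counters above are internally consistent: every abort that was actually submitted is accounted as either successful or unsuccessful, and the 45 that failed to submit are tracked separately.

  # success + unsuccessful should equal the submitted abort count:
  echo $((4921 + 162))   # => 5083, matching 'abort submitted 5083'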
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:20.215 rmmod nvme_tcp
00:32:20.215 rmmod nvme_fabrics
00:32:20.215 rmmod nvme_keyring
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2794416 ']'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2794416
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2794416 ']'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2794416
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2794416
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2794416'
00:32:20.215 killing process with pid 2794416
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2794416
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2794416
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:20.215 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:21.593
00:32:21.593 real 0m32.057s
00:32:21.593 user 0m41.512s
00:32:21.593 sys 0m12.779s
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:21.593 ************************************
00:32:21.593 END TEST nvmf_zcopy
00:32:21.593 ************************************
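One detail of the teardown above worth spelling out: the iptr helper logged at nvmf/common.sh@297 removes only the firewall rules the framework tagged, by round-tripping the ruleset through a filter. A standalone sketch of the same pipeline (needs root, and assumes the rules of interest all carry the SPDK_NVMF marker):

  # Dump the current iptables rules, drop every line mentioning SPDK_NVMF,
  # and load the filtered set back, leaving all other rules untouched:
  iptables-save | grep -v SPDK_NVMF | iptables-restore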
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:21.593 ************************************
00:32:21.593 START TEST nvmf_nmic
00:32:21.593 ************************************
00:32:21.593 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:32:21.593 * Looking for test storage...
00:32:21.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:21.594 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.854 --rc genhtml_branch_coverage=1 00:32:21.854 --rc genhtml_function_coverage=1 00:32:21.854 --rc genhtml_legend=1 00:32:21.854 --rc geninfo_all_blocks=1 00:32:21.854 --rc geninfo_unexecuted_blocks=1 00:32:21.854 00:32:21.854 ' 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.854 --rc genhtml_branch_coverage=1 00:32:21.854 --rc genhtml_function_coverage=1 00:32:21.854 --rc genhtml_legend=1 00:32:21.854 --rc geninfo_all_blocks=1 00:32:21.854 --rc geninfo_unexecuted_blocks=1 00:32:21.854 00:32:21.854 ' 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.854 --rc genhtml_branch_coverage=1 00:32:21.854 --rc genhtml_function_coverage=1 00:32:21.854 --rc genhtml_legend=1 00:32:21.854 --rc geninfo_all_blocks=1 00:32:21.854 --rc geninfo_unexecuted_blocks=1 00:32:21.854 00:32:21.854 ' 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.854 --rc genhtml_branch_coverage=1 00:32:21.854 --rc genhtml_function_coverage=1 00:32:21.854 --rc genhtml_legend=1 00:32:21.854 --rc geninfo_all_blocks=1 00:32:21.854 --rc geninfo_unexecuted_blocks=1 00:32:21.854 00:32:21.854 ' 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.854 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.855 17:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:21.855 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.424 17:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:28.424 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.424 17:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:28.424 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:28.424 Found net devices under 0000:af:00.0: cvl_0_0 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.424 
17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:28.424 Found net devices under 0000:af:00.1: cvl_0_1 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
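(The bring-up that nvmf_tcp_init performs here, and which finishes in the lines below, amounts to a two-port loopback: one port of the dual-port E810 NIC, cvl_0_0, is moved into a private network namespace to act as the target side, while its peer port cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of the equivalent commands, with interface, namespace, and address names taken from this run:

  # condensed replay of the TCP loopback setup seen in this log
  ip netns add cvl_0_0_ns_spdk                       # private namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target reachability check

The two ping runs recorded below confirm both directions of this path before the target application is started.)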
00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.424 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:32:28.425 00:32:28.425 --- 10.0.0.2 ping statistics --- 00:32:28.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.425 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:28.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:32:28.425 00:32:28.425 --- 10.0.0.1 ping statistics --- 00:32:28.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.425 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2801557 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2801557 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2801557 ']' 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.425 17:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 [2024-12-09 17:42:56.776149] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:28.425 [2024-12-09 17:42:56.777024] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:32:28.425 [2024-12-09 17:42:56.777057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.425 [2024-12-09 17:42:56.853318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:28.425 [2024-12-09 17:42:56.896549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.425 [2024-12-09 17:42:56.896587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.425 [2024-12-09 17:42:56.896594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.425 [2024-12-09 17:42:56.896601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.425 [2024-12-09 17:42:56.896606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.425 [2024-12-09 17:42:56.898127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.425 [2024-12-09 17:42:56.898268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.425 [2024-12-09 17:42:56.898306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.425 [2024-12-09 17:42:56.898306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.425 [2024-12-09 17:42:56.967691] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:28.425 [2024-12-09 17:42:56.968362] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:28.425 [2024-12-09 17:42:56.968598] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
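(nvmfappstart launches nvmf_tgt inside the target namespace; because the suite runs with --interrupt-mode, every reactor and spdk_thread is switched to interrupt mode, which is what the thread.c and reactor.c notices above record. A simplified sketch of the launch-and-wait step, with the binary path and flags copied from this run; the polling loop is only an approximation of what waitforlisten does, using the rpc_get_methods RPC as a liveness probe:

  # start the target in the namespace (flags as used by this run)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # approximate waitforlisten: poll until the app answers on its RPC socket
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
)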
00:32:28.425 [2024-12-09 17:42:56.968757] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:28.425 [2024-12-09 17:42:56.968841] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 [2024-12-09 17:42:57.043079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 Malloc0 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 [2024-12-09 17:42:57.119273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:28.425 test case1: single bdev can't be used in multiple subsystems 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.425 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 [2024-12-09 17:42:57.142831] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:28.425 [2024-12-09 17:42:57.142849] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:28.425 [2024-12-09 17:42:57.142856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.425 request: 00:32:28.425 { 00:32:28.425 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:28.425 "namespace": { 00:32:28.425 "bdev_name": "Malloc0", 00:32:28.425 "no_auto_visible": false, 00:32:28.425 "hide_metadata": false 00:32:28.425 }, 00:32:28.426 "method": "nvmf_subsystem_add_ns", 00:32:28.426 "req_id": 1 00:32:28.426 } 00:32:28.426 Got JSON-RPC error response 00:32:28.426 response: 00:32:28.426 { 00:32:28.426 "code": -32602, 00:32:28.426 "message": "Invalid parameters" 00:32:28.426 } 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:28.426 17:42:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:28.426 Adding namespace failed - expected result. 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:28.426 test case2: host connect to nvmf target in multiple paths 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:28.426 [2024-12-09 17:42:57.154908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:28.426 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:28.685 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:28.685 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:28.685 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:28.685 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:28.685 17:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:30.587 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:30.862 [global] 00:32:30.862 thread=1 00:32:30.862 invalidate=1 
00:32:30.862 rw=write 00:32:30.862 time_based=1 00:32:30.862 runtime=1 00:32:30.862 ioengine=libaio 00:32:30.862 direct=1 00:32:30.862 bs=4096 00:32:30.862 iodepth=1 00:32:30.862 norandommap=0 00:32:30.862 numjobs=1 00:32:30.862 00:32:30.862 verify_dump=1 00:32:30.862 verify_backlog=512 00:32:30.862 verify_state_save=0 00:32:30.862 do_verify=1 00:32:30.862 verify=crc32c-intel 00:32:30.862 [job0] 00:32:30.862 filename=/dev/nvme0n1 00:32:30.862 Could not set queue depth (nvme0n1) 00:32:31.119 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:31.119 fio-3.35 00:32:31.119 Starting 1 thread 00:32:32.490 00:32:32.490 job0: (groupid=0, jobs=1): err= 0: pid=2802364: Mon Dec 9 17:43:01 2024 00:32:32.490 read: IOPS=559, BW=2239KiB/s (2293kB/s)(2284KiB/1020msec) 00:32:32.490 slat (nsec): min=7355, max=20150, avg=8379.53, stdev=1124.31 00:32:32.490 clat (usec): min=184, max=42054, avg=1419.78, stdev=6959.97 00:32:32.490 lat (usec): min=193, max=42066, avg=1428.16, stdev=6960.60 00:32:32.490 clat percentiles (usec): 00:32:32.490 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198], 00:32:32.490 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 200], 60.00th=[ 202], 00:32:32.490 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 210], 95.00th=[ 217], 00:32:32.490 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:32.490 | 99.99th=[42206] 00:32:32.490 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:32:32.490 slat (usec): min=10, max=28735, avg=40.18, stdev=897.60 00:32:32.490 clat (usec): min=128, max=255, avg=154.24, stdev=24.06 00:32:32.490 lat (usec): min=139, max=28990, avg=194.42, stdev=901.07 00:32:32.490 clat percentiles (usec): 00:32:32.490 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:32:32.490 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:32:32.490 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 188], 00:32:32.490 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 253], 99.95th=[ 255], 00:32:32.490 | 99.99th=[ 255] 00:32:32.490 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:32:32.490 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:32.490 lat (usec) : 250=98.62%, 500=0.31% 00:32:32.490 lat (msec) : 50=1.07% 00:32:32.490 cpu : usr=1.18%, sys=2.65%, ctx=1600, majf=0, minf=1 00:32:32.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.490 issued rwts: total=571,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:32.490 00:32:32.490 Run status group 0 (all jobs): 00:32:32.490 READ: bw=2239KiB/s (2293kB/s), 2239KiB/s-2239KiB/s (2293kB/s-2293kB/s), io=2284KiB (2339kB), run=1020-1020msec 00:32:32.490 WRITE: bw=4016KiB/s (4112kB/s), 4016KiB/s-4016KiB/s (4112kB/s-4112kB/s), io=4096KiB (4194kB), run=1020-1020msec 00:32:32.490 00:32:32.490 Disk stats (read/write): 00:32:32.490 nvme0n1: ios=594/1024, merge=0/0, ticks=1672/140, in_queue=1812, util=98.60% 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:32.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:32.490 17:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.490 rmmod nvme_tcp 00:32:32.490 rmmod nvme_fabrics 00:32:32.490 rmmod nvme_keyring 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2801557 ']' 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2801557 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2801557 ']' 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2801557 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801557 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2801557' 00:32:32.490 killing process with pid 2801557 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2801557 00:32:32.490 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2801557 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.749 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:35.349 00:32:35.349 real 0m13.232s 00:32:35.349 user 0m24.292s 00:32:35.349 sys 0m6.075s 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:35.349 ************************************ 00:32:35.349 END TEST nvmf_nmic 00:32:35.349 ************************************ 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:35.349 ************************************ 00:32:35.349 START TEST nvmf_fio_target 00:32:35.349 ************************************ 00:32:35.349 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:35.349 * Looking for test storage... 
00:32:35.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:35.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.349 --rc genhtml_branch_coverage=1 00:32:35.349 --rc genhtml_function_coverage=1 00:32:35.349 --rc genhtml_legend=1 00:32:35.349 --rc geninfo_all_blocks=1 00:32:35.349 --rc geninfo_unexecuted_blocks=1 00:32:35.349 00:32:35.349 ' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:35.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.349 --rc genhtml_branch_coverage=1 00:32:35.349 --rc genhtml_function_coverage=1 00:32:35.349 --rc genhtml_legend=1 00:32:35.349 --rc geninfo_all_blocks=1 00:32:35.349 --rc geninfo_unexecuted_blocks=1 00:32:35.349 00:32:35.349 ' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:35.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.349 --rc genhtml_branch_coverage=1 00:32:35.349 --rc genhtml_function_coverage=1 00:32:35.349 --rc genhtml_legend=1 00:32:35.349 --rc geninfo_all_blocks=1 00:32:35.349 --rc geninfo_unexecuted_blocks=1 00:32:35.349 00:32:35.349 ' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:35.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.349 --rc genhtml_branch_coverage=1 00:32:35.349 --rc genhtml_function_coverage=1 00:32:35.349 --rc genhtml_legend=1 00:32:35.349 --rc geninfo_all_blocks=1 00:32:35.349 --rc geninfo_unexecuted_blocks=1 00:32:35.349 
00:32:35.349 ' 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:35.349 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:35.350 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:40.666 17:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:40.666 17:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:40.666 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:40.666 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:40.666 Found net 
devices under 0000:af:00.0: cvl_0_0 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:40.666 Found net devices under 0000:af:00.1: cvl_0_1 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:40.666 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.667 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.925 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:40.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:32:40.925 00:32:40.925 --- 10.0.0.2 ping statistics --- 00:32:40.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.925 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:40.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:32:40.925 00:32:40.925 --- 10.0.0.1 ping statistics --- 00:32:40.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.925 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2806005 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2806005 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2806005 ']' 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
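The nvmf_tcp_init sequence traced above turns the two E810 ports into a point-to-point NVMe/TCP link: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and reachability is then verified in both directions with a single ping. A condensed sketch of the equivalent manual setup, using the interface and namespace names from the trace (the ipts helper seen above is SPDK's iptables wrapper; plain iptables is used here):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # default ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator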
00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.925 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.184 [2024-12-09 17:43:10.116669] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:41.184 [2024-12-09 17:43:10.117598] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:32:41.184 [2024-12-09 17:43:10.117634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.184 [2024-12-09 17:43:10.197799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:41.184 [2024-12-09 17:43:10.240128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.184 [2024-12-09 17:43:10.240163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.184 [2024-12-09 17:43:10.240170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.184 [2024-12-09 17:43:10.240176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.184 [2024-12-09 17:43:10.240181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:41.184 [2024-12-09 17:43:10.241692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.184 [2024-12-09 17:43:10.241801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:41.184 [2024-12-09 17:43:10.241908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.184 [2024-12-09 17:43:10.241909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:41.184 [2024-12-09 17:43:10.310837] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:41.184 [2024-12-09 17:43:10.311234] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:41.184 [2024-12-09 17:43:10.311654] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:41.184 [2024-12-09 17:43:10.311850] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:41.184 [2024-12-09 17:43:10.311885] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
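With the network in place, the target is started inside the namespace in interrupt mode and then provisioned over its RPC socket: a TCP transport, seven malloc bdevs (64 MiB, 512 B blocks, per the flags below), a raid0 and a concat bdev built from five of them, and a single subsystem exposing four namespaces behind a listener on port 4420, which the initiator then connects to. A condensed sketch of the launch plus the RPC sequence that follows in the trace, assuming the workspace layout shown in the log (the waitforlisten step is abbreviated to a comment, and the hostnqn/hostid values are the ones generated earlier in the run):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # ... wait until the app listens on /var/tmp/spdk.sock ...
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done          # Malloc0 .. Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

One simplification: the traced fio.sh captures each bdev name from the RPC's stdout rather than assuming Malloc0..Malloc6, so the fixed names in the loop above stand in for that bookkeeping.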
00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.118 17:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:42.118 [2024-12-09 17:43:11.154554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.118 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:42.376 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:42.376 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:42.635 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:42.635 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:42.894 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:42.894 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:42.894 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:42.894 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:43.153 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:43.411 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:43.411 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:43.670 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:43.670 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:43.670 17:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:43.670 17:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:43.929 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:44.188 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:44.188 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:44.446 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:44.446 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:44.446 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:44.703 [2024-12-09 17:43:13.738471] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.703 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:44.961 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:45.219 17:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:47.745 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:47.745 [global] 00:32:47.745 thread=1 00:32:47.745 invalidate=1 00:32:47.745 rw=write 00:32:47.745 time_based=1 00:32:47.745 runtime=1 00:32:47.745 ioengine=libaio 00:32:47.745 direct=1 00:32:47.745 bs=4096 00:32:47.745 iodepth=1 00:32:47.745 norandommap=0 00:32:47.745 numjobs=1 00:32:47.745 00:32:47.745 verify_dump=1 00:32:47.745 verify_backlog=512 00:32:47.745 verify_state_save=0 00:32:47.745 do_verify=1 00:32:47.745 verify=crc32c-intel 00:32:47.745 [job0] 00:32:47.745 filename=/dev/nvme0n1 00:32:47.745 [job1] 00:32:47.745 filename=/dev/nvme0n2 00:32:47.745 [job2] 00:32:47.745 filename=/dev/nvme0n3 00:32:47.745 [job3] 00:32:47.745 filename=/dev/nvme0n4 00:32:47.745 Could not set queue depth (nvme0n1) 00:32:47.745 Could not set queue depth (nvme0n2) 00:32:47.745 Could not set queue depth (nvme0n3) 00:32:47.745 Could not set queue depth (nvme0n4) 00:32:47.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.745 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.745 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.745 fio-3.35 00:32:47.745 Starting 4 threads 00:32:49.123 00:32:49.123 job0: (groupid=0, jobs=1): err= 0: pid=2807202: Mon Dec 9 17:43:17 2024 00:32:49.123 read: IOPS=22, BW=91.2KiB/s (93.4kB/s)(92.0KiB/1009msec) 00:32:49.123 slat (nsec): min=9503, max=28265, avg=22200.52, stdev=4312.60 00:32:49.123 clat (usec): min=287, max=41961, avg=39329.03, stdev=8517.93 00:32:49.123 lat (usec): min=310, max=41985, avg=39351.23, stdev=8517.85 00:32:49.123 clat percentiles (usec): 00:32:49.123 | 1.00th=[ 289], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:49.123 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:49.123 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:32:49.123 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:49.123 | 99.99th=[42206] 00:32:49.123 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:32:49.123 slat (usec): min=10, max=176, avg=11.98, stdev= 7.72 00:32:49.123 clat (usec): min=114, max=312, avg=186.76, stdev=22.11 00:32:49.123 lat (usec): min=143, max=489, avg=198.74, stdev=25.04 00:32:49.123 clat percentiles (usec): 00:32:49.123 | 1.00th=[ 141], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:32:49.123 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:32:49.123 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 233], 00:32:49.123 | 99.00th=[ 
255], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 314], 00:32:49.123 | 99.99th=[ 314] 00:32:49.123 bw ( KiB/s): min= 4096, max= 4096, per=23.25%, avg=4096.00, stdev= 0.00, samples=1 00:32:49.123 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:49.123 lat (usec) : 250=94.21%, 500=1.68% 00:32:49.123 lat (msec) : 50=4.11% 00:32:49.123 cpu : usr=0.79%, sys=0.50%, ctx=536, majf=0, minf=1 00:32:49.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.123 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:49.123 job1: (groupid=0, jobs=1): err= 0: pid=2807203: Mon Dec 9 17:43:17 2024 00:32:49.123 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:32:49.123 slat (nsec): min=7257, max=44529, avg=8299.41, stdev=1348.97 00:32:49.123 clat (usec): min=160, max=957, avg=199.33, stdev=36.35 00:32:49.123 lat (usec): min=168, max=965, avg=207.63, stdev=36.35 00:32:49.123 clat percentiles (usec): 00:32:49.123 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180], 00:32:49.123 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:32:49.123 | 70.00th=[ 192], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 251], 00:32:49.123 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 693], 99.95th=[ 750], 00:32:49.123 | 99.99th=[ 955] 00:32:49.123 write: IOPS=2905, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:32:49.123 slat (nsec): min=10749, max=43077, avg=12048.67, stdev=1830.69 00:32:49.123 clat (usec): min=115, max=815, avg=143.55, stdev=26.59 00:32:49.123 lat (usec): min=132, max=826, avg=155.60, stdev=26.92 00:32:49.123 clat percentiles (usec): 00:32:49.123 | 1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 131], 00:32:49.123 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:32:49.123 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 182], 95.00th=[ 192], 00:32:49.123 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 306], 99.95th=[ 318], 00:32:49.123 | 99.99th=[ 816] 00:32:49.123 bw ( KiB/s): min=12288, max=12288, per=69.75%, avg=12288.00, stdev= 0.00, samples=1 00:32:49.123 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:49.123 lat (usec) : 250=96.78%, 500=3.13%, 750=0.05%, 1000=0.04% 00:32:49.123 cpu : usr=5.50%, sys=7.80%, ctx=5469, majf=0, minf=1 00:32:49.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.123 issued rwts: total=2560,2908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:49.123 job2: (groupid=0, jobs=1): err= 0: pid=2807204: Mon Dec 9 17:43:17 2024 00:32:49.123 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:32:49.123 slat (nsec): min=10513, max=25451, avg=23981.91, stdev=3117.27 00:32:49.123 clat (usec): min=40493, max=41085, avg=40944.99, stdev=113.27 00:32:49.123 lat (usec): min=40504, max=41108, avg=40968.97, stdev=115.91 00:32:49.123 clat percentiles (usec): 00:32:49.123 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:49.123 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:49.123 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:49.123 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:49.123 | 99.99th=[41157] 00:32:49.123 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:32:49.123 slat (nsec): min=11111, max=39169, avg=12675.32, stdev=2643.42 00:32:49.123 clat (usec): min=157, max=337, avg=183.40, stdev=15.33 00:32:49.123 lat (usec): min=170, max=376, avg=196.07, stdev=16.31 00:32:49.123 clat percentiles (usec): 00:32:49.123 | 1.00th=[ 167], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 176], 00:32:49.123 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:32:49.123 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:32:49.123 | 99.00th=[ 237], 99.50th=[ 285], 99.90th=[ 338], 99.95th=[ 338], 00:32:49.123 | 99.99th=[ 338] 00:32:49.123 bw ( KiB/s): min= 4096, max= 4096, per=23.25%, avg=4096.00, stdev= 0.00, samples=1 00:32:49.123 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:49.123 lat (usec) : 250=95.13%, 500=0.75% 00:32:49.123 lat (msec) : 50=4.12% 00:32:49.123 cpu : usr=0.50%, sys=0.90%, ctx=535, majf=0, minf=2 00:32:49.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.124 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:49.124 job3: (groupid=0, jobs=1): err= 0: pid=2807205: Mon Dec 9 17:43:17 2024 00:32:49.124 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1003msec) 00:32:49.124 slat (nsec): min=8882, max=25228, avg=18987.65, stdev=5880.77 00:32:49.124 clat (usec): min=233, max=41049, avg=34677.15, stdev=14969.24 00:32:49.124 lat (usec): min=247, max=41070, avg=34696.14, stdev=14971.30 00:32:49.124 clat percentiles (usec): 00:32:49.124 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[40633], 00:32:49.124 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:49.124 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:49.124 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:49.124 | 99.99th=[41157] 00:32:49.124 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:32:49.124 slat (nsec): min=11127, max=39767, avg=13253.39, stdev=2589.40 00:32:49.124 clat (usec): min=159, max=317, avg=179.14, stdev=12.35 00:32:49.124 lat (usec): min=171, max=356, avg=192.40, stdev=13.04 00:32:49.124 clat percentiles (usec): 00:32:49.124 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:32:49.124 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:32:49.124 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 196], 00:32:49.124 | 99.00th=[ 217], 99.50th=[ 255], 99.90th=[ 318], 99.95th=[ 318], 00:32:49.124 | 99.99th=[ 318] 00:32:49.124 bw ( KiB/s): min= 4096, max= 4096, per=23.25%, avg=4096.00, stdev= 0.00, samples=1 00:32:49.124 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:49.124 lat (usec) : 250=94.98%, 500=0.93% 00:32:49.124 lat (msec) : 50=4.09% 00:32:49.124 cpu : usr=0.60%, sys=0.80%, ctx=540, majf=0, minf=1 00:32:49.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.124 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.124 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:49.124 00:32:49.124 Run status group 0 (all jobs): 00:32:49.124 READ: bw=10.2MiB/s (10.7MB/s), 87.6KiB/s-9.99MiB/s (89.8kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1009msec 00:32:49.124 WRITE: bw=17.2MiB/s (18.0MB/s), 2030KiB/s-11.3MiB/s (2078kB/s-11.9MB/s), io=17.4MiB (18.2MB), run=1001-1009msec 00:32:49.124 00:32:49.124 Disk stats (read/write): 00:32:49.124 nvme0n1: ios=69/512, merge=0/0, ticks=762/88, in_queue=850, util=85.77% 00:32:49.124 nvme0n2: ios=2180/2560, merge=0/0, ticks=1372/345, in_queue=1717, util=97.35% 00:32:49.124 nvme0n3: ios=75/512, merge=0/0, ticks=1546/89, in_queue=1635, util=97.27% 00:32:49.124 nvme0n4: ios=48/512, merge=0/0, ticks=1684/81, in_queue=1765, util=97.25% 00:32:49.124 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:49.124 [global] 00:32:49.124 thread=1 00:32:49.124 invalidate=1 00:32:49.124 rw=randwrite 00:32:49.124 time_based=1 00:32:49.124 runtime=1 00:32:49.124 ioengine=libaio 00:32:49.124 direct=1 00:32:49.124 bs=4096 00:32:49.124 iodepth=1 00:32:49.124 norandommap=0 00:32:49.124 numjobs=1 00:32:49.124 00:32:49.124 verify_dump=1 00:32:49.124 verify_backlog=512 00:32:49.124 verify_state_save=0 00:32:49.124 do_verify=1 00:32:49.124 verify=crc32c-intel 00:32:49.124 [job0] 00:32:49.124 filename=/dev/nvme0n1 00:32:49.124 [job1] 00:32:49.124 filename=/dev/nvme0n2 00:32:49.124 [job2] 00:32:49.124 filename=/dev/nvme0n3 00:32:49.124 [job3] 00:32:49.124 filename=/dev/nvme0n4 00:32:49.124 Could not set queue depth (nvme0n1) 00:32:49.124 Could not set queue depth (nvme0n2) 00:32:49.124 Could not set queue depth (nvme0n3) 00:32:49.124 Could not set queue depth (nvme0n4) 00:32:49.381 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:49.381 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:49.381 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:49.381 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:49.381 fio-3.35 00:32:49.381 Starting 4 threads 00:32:50.754 00:32:50.754 job0: (groupid=0, jobs=1): err= 0: pid=2807574: Mon Dec 9 17:43:19 2024 00:32:50.754 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:32:50.754 slat (nsec): min=6716, max=35849, avg=8957.20, stdev=3412.15 00:32:50.754 clat (usec): min=195, max=41335, avg=691.52, stdev=4213.83 00:32:50.754 lat (usec): min=206, max=41346, avg=700.47, stdev=4214.33 00:32:50.754 clat percentiles (usec): 00:32:50.754 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:32:50.754 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:32:50.754 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:32:50.754 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:50.754 | 99.99th=[41157] 00:32:50.754 write: IOPS=1503, BW=6014KiB/s (6158kB/s)(6020KiB/1001msec); 0 zone resets 00:32:50.754 slat (nsec): min=9475, max=47205, avg=12194.76, stdev=3800.65 00:32:50.754 clat (usec): min=116, max=549, avg=170.23, stdev=22.70 
00:32:50.754 lat (usec): min=149, max=563, avg=182.42, stdev=22.86 00:32:50.754 clat percentiles (usec): 00:32:50.754 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:32:50.754 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:32:50.754 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 204], 00:32:50.754 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 441], 99.95th=[ 553], 00:32:50.754 | 99.99th=[ 553] 00:32:50.754 bw ( KiB/s): min= 7104, max= 7104, per=27.05%, avg=7104.00, stdev= 0.00, samples=1 00:32:50.754 iops : min= 1776, max= 1776, avg=1776.00, stdev= 0.00, samples=1 00:32:50.754 lat (usec) : 250=89.44%, 500=10.00%, 750=0.04% 00:32:50.754 lat (msec) : 4=0.04%, 10=0.04%, 50=0.43% 00:32:50.754 cpu : usr=2.10%, sys=4.20%, ctx=2529, majf=0, minf=2 00:32:50.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 issued rwts: total=1024,1505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:50.755 job1: (groupid=0, jobs=1): err= 0: pid=2807575: Mon Dec 9 17:43:19 2024 00:32:50.755 read: IOPS=2263, BW=9055KiB/s (9272kB/s)(9064KiB/1001msec) 00:32:50.755 slat (nsec): min=6405, max=25248, avg=7356.44, stdev=922.82 00:32:50.755 clat (usec): min=191, max=289, avg=228.12, stdev=12.87 00:32:50.755 lat (usec): min=199, max=296, avg=235.48, stdev=12.92 00:32:50.755 clat percentiles (usec): 00:32:50.755 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:32:50.755 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:32:50.755 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 251], 00:32:50.755 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 289], 00:32:50.755 | 99.99th=[ 289] 00:32:50.755 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:50.755 slat (nsec): min=9313, max=38153, avg=10557.95, stdev=1395.06 00:32:50.755 clat (usec): min=127, max=1520, avg=166.50, stdev=39.18 00:32:50.755 lat (usec): min=138, max=1530, avg=177.06, stdev=39.29 00:32:50.755 clat percentiles (usec): 00:32:50.755 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 147], 00:32:50.755 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:32:50.755 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 237], 00:32:50.755 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 367], 99.95th=[ 437], 00:32:50.755 | 99.99th=[ 1516] 00:32:50.755 bw ( KiB/s): min=10232, max=10232, per=38.96%, avg=10232.00, stdev= 0.00, samples=1 00:32:50.755 iops : min= 2558, max= 2558, avg=2558.00, stdev= 0.00, samples=1 00:32:50.755 lat (usec) : 250=96.13%, 500=3.85% 00:32:50.755 lat (msec) : 2=0.02% 00:32:50.755 cpu : usr=2.30%, sys=4.70%, ctx=4829, majf=0, minf=1 00:32:50.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 issued rwts: total=2266,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:50.755 job2: (groupid=0, jobs=1): err= 0: pid=2807576: Mon Dec 9 17:43:19 2024 00:32:50.755 read: IOPS=1527, BW=6109KiB/s (6256kB/s)(6164KiB/1009msec) 00:32:50.755 slat (nsec): 
min=6951, max=40173, avg=8188.13, stdev=1871.45 00:32:50.755 clat (usec): min=206, max=41956, avg=373.95, stdev=2152.20 00:32:50.755 lat (usec): min=215, max=41979, avg=382.13, stdev=2152.53 00:32:50.755 clat percentiles (usec): 00:32:50.755 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:32:50.755 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:32:50.755 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 416], 00:32:50.755 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[41681], 99.95th=[42206], 00:32:50.755 | 99.99th=[42206] 00:32:50.755 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:32:50.755 slat (nsec): min=9728, max=43259, avg=11386.30, stdev=2450.55 00:32:50.755 clat (usec): min=132, max=355, avg=188.49, stdev=32.35 00:32:50.755 lat (usec): min=142, max=398, avg=199.88, stdev=32.46 00:32:50.755 clat percentiles (usec): 00:32:50.755 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:32:50.755 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:32:50.755 | 70.00th=[ 192], 80.00th=[ 210], 90.00th=[ 241], 95.00th=[ 245], 00:32:50.755 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 310], 99.95th=[ 322], 00:32:50.755 | 99.99th=[ 355] 00:32:50.755 bw ( KiB/s): min= 6664, max= 9720, per=31.19%, avg=8192.00, stdev=2160.92, samples=2 00:32:50.755 iops : min= 1666, max= 2430, avg=2048.00, stdev=540.23, samples=2 00:32:50.755 lat (usec) : 250=82.42%, 500=17.41%, 750=0.03% 00:32:50.755 lat (msec) : 50=0.14% 00:32:50.755 cpu : usr=2.18%, sys=6.45%, ctx=3590, majf=0, minf=1 00:32:50.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:50.755 job3: (groupid=0, jobs=1): err= 0: pid=2807577: Mon Dec 9 17:43:19 2024 00:32:50.755 read: IOPS=23, BW=95.4KiB/s (97.7kB/s)(96.0KiB/1006msec) 00:32:50.755 slat (nsec): min=9142, max=26799, avg=20192.71, stdev=5838.87 00:32:50.755 clat (usec): min=234, max=41067, avg=37543.92, stdev=11489.95 00:32:50.755 lat (usec): min=258, max=41079, avg=37564.11, stdev=11490.03 00:32:50.755 clat percentiles (usec): 00:32:50.755 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[40633], 20.00th=[40633], 00:32:50.755 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:50.755 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:50.755 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:50.755 | 99.99th=[41157] 00:32:50.755 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:32:50.755 slat (nsec): min=10588, max=35092, avg=12272.87, stdev=1911.97 00:32:50.755 clat (usec): min=155, max=257, avg=177.74, stdev=11.30 00:32:50.755 lat (usec): min=166, max=268, avg=190.01, stdev=11.62 00:32:50.755 clat percentiles (usec): 00:32:50.755 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:32:50.755 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:32:50.755 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:32:50.755 | 99.00th=[ 212], 99.50th=[ 215], 99.90th=[ 258], 99.95th=[ 258], 00:32:50.755 | 99.99th=[ 258] 00:32:50.755 bw ( KiB/s): min= 4096, max= 4096, per=15.60%, avg=4096.00, stdev= 0.00, samples=1 
00:32:50.755 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:50.755 lat (usec) : 250=95.71%, 500=0.19% 00:32:50.755 lat (msec) : 50=4.10% 00:32:50.755 cpu : usr=1.00%, sys=0.40%, ctx=538, majf=0, minf=1 00:32:50.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.755 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:50.755 00:32:50.755 Run status group 0 (all jobs): 00:32:50.755 READ: bw=18.8MiB/s (19.7MB/s), 95.4KiB/s-9055KiB/s (97.7kB/s-9272kB/s), io=19.0MiB (19.9MB), run=1001-1009msec 00:32:50.755 WRITE: bw=25.6MiB/s (26.9MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=25.9MiB (27.1MB), run=1001-1009msec 00:32:50.755 00:32:50.755 Disk stats (read/write): 00:32:50.755 nvme0n1: ios=546/1024, merge=0/0, ticks=783/167, in_queue=950, util=96.29% 00:32:50.755 nvme0n2: ios=1751/2048, merge=0/0, ticks=1372/341, in_queue=1713, util=99.79% 00:32:50.755 nvme0n3: ios=1536/1850, merge=0/0, ticks=368/337, in_queue=705, util=86.77% 00:32:50.755 nvme0n4: ios=67/512, merge=0/0, ticks=1049/86, in_queue=1135, util=100.00% 00:32:50.755 17:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:50.755 [global] 00:32:50.755 thread=1 00:32:50.755 invalidate=1 00:32:50.755 rw=write 00:32:50.755 time_based=1 00:32:50.755 runtime=1 00:32:50.755 ioengine=libaio 00:32:50.755 direct=1 00:32:50.755 bs=4096 00:32:50.755 iodepth=128 00:32:50.755 norandommap=0 00:32:50.755 numjobs=1 00:32:50.755 00:32:50.755 verify_dump=1 00:32:50.755 verify_backlog=512 00:32:50.755 verify_state_save=0 00:32:50.755 do_verify=1 00:32:50.755 verify=crc32c-intel 00:32:50.755 [job0] 00:32:50.755 filename=/dev/nvme0n1 00:32:50.755 [job1] 00:32:50.755 filename=/dev/nvme0n2 00:32:50.755 [job2] 00:32:50.755 filename=/dev/nvme0n3 00:32:50.755 [job3] 00:32:50.755 filename=/dev/nvme0n4 00:32:50.755 Could not set queue depth (nvme0n1) 00:32:50.755 Could not set queue depth (nvme0n2) 00:32:50.755 Could not set queue depth (nvme0n3) 00:32:50.755 Could not set queue depth (nvme0n4) 00:32:51.013 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:51.013 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:51.013 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:51.013 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:51.013 fio-3.35 00:32:51.013 Starting 4 threads 00:32:52.385 00:32:52.385 job0: (groupid=0, jobs=1): err= 0: pid=2807959: Mon Dec 9 17:43:21 2024 00:32:52.385 read: IOPS=2320, BW=9283KiB/s (9506kB/s)(9376KiB/1010msec) 00:32:52.385 slat (nsec): min=1587, max=17130k, avg=154472.83, stdev=1023180.88 00:32:52.385 clat (usec): min=3385, max=56901, avg=18113.70, stdev=9705.55 00:32:52.385 lat (usec): min=9217, max=56918, avg=18268.17, stdev=9780.24 00:32:52.385 clat percentiles (usec): 00:32:52.385 | 1.00th=[11731], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:32:52.385 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14746], 
00:32:52.385 | 70.00th=[15270], 80.00th=[18220], 90.00th=[36963], 95.00th=[42730], 00:32:52.385 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[53740], 00:32:52.385 | 99.99th=[56886] 00:32:52.385 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:32:52.385 slat (usec): min=2, max=22733, avg=244.60, stdev=1370.61 00:32:52.385 clat (usec): min=8064, max=91352, avg=32607.97, stdev=17733.22 00:32:52.385 lat (usec): min=8074, max=91356, avg=32852.57, stdev=17859.31 00:32:52.385 clat percentiles (usec): 00:32:52.385 | 1.00th=[11863], 5.00th=[12780], 10.00th=[13173], 20.00th=[15270], 00:32:52.385 | 30.00th=[20841], 40.00th=[23462], 50.00th=[27657], 60.00th=[35390], 00:32:52.385 | 70.00th=[41157], 80.00th=[47449], 90.00th=[54264], 95.00th=[62653], 00:32:52.385 | 99.00th=[90702], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:32:52.385 | 99.99th=[91751] 00:32:52.385 bw ( KiB/s): min= 9868, max=10592, per=16.15%, avg=10230.00, stdev=511.95, samples=2 00:32:52.385 iops : min= 2467, max= 2648, avg=2557.50, stdev=127.99, samples=2 00:32:52.385 lat (msec) : 4=0.02%, 10=0.37%, 20=53.51%, 50=38.01%, 100=8.10% 00:32:52.385 cpu : usr=1.98%, sys=3.37%, ctx=246, majf=0, minf=1 00:32:52.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:52.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:52.385 issued rwts: total=2344,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:52.385 job1: (groupid=0, jobs=1): err= 0: pid=2807977: Mon Dec 9 17:43:21 2024 00:32:52.385 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:32:52.385 slat (nsec): min=981, max=16847k, avg=118898.61, stdev=811003.14 00:32:52.385 clat (usec): min=2398, max=62892, avg=13908.10, stdev=8783.07 00:32:52.385 lat (usec): min=2404, max=62899, avg=14027.00, stdev=8856.36 00:32:52.385 clat percentiles (usec): 00:32:52.385 | 1.00th=[ 2540], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[ 8979], 00:32:52.385 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[10945], 60.00th=[12649], 00:32:52.386 | 70.00th=[14746], 80.00th=[16909], 90.00th=[21890], 95.00th=[29754], 00:32:52.386 | 99.00th=[54264], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:32:52.386 | 99.99th=[62653] 00:32:52.386 write: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1009msec); 0 zone resets 00:32:52.386 slat (nsec): min=1993, max=11358k, avg=174255.89, stdev=846768.10 00:32:52.386 clat (usec): min=1182, max=97778, avg=25072.65, stdev=18488.62 00:32:52.386 lat (usec): min=1192, max=97786, avg=25246.90, stdev=18597.61 00:32:52.386 clat percentiles (usec): 00:32:52.386 | 1.00th=[ 2737], 5.00th=[ 5538], 10.00th=[ 7308], 20.00th=[ 9634], 00:32:52.386 | 30.00th=[10945], 40.00th=[13698], 50.00th=[23462], 60.00th=[28181], 00:32:52.386 | 70.00th=[32375], 80.00th=[34866], 90.00th=[49021], 95.00th=[62129], 00:32:52.386 | 99.00th=[95945], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:32:52.386 | 99.99th=[98042] 00:32:52.386 bw ( KiB/s): min=12288, max=13920, per=20.69%, avg=13104.00, stdev=1154.00, samples=2 00:32:52.386 iops : min= 3072, max= 3480, avg=3276.00, stdev=288.50, samples=2 00:32:52.386 lat (msec) : 2=0.29%, 4=1.81%, 10=28.17%, 20=35.96%, 50=27.72% 00:32:52.386 lat (msec) : 100=6.05% 00:32:52.386 cpu : usr=2.28%, sys=3.47%, ctx=360, majf=0, minf=2 00:32:52.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.0% 00:32:52.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:52.386 issued rwts: total=3072,3404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:52.386 job2: (groupid=0, jobs=1): err= 0: pid=2808011: Mon Dec 9 17:43:21 2024 00:32:52.386 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:32:52.386 slat (nsec): min=1365, max=3763.4k, avg=76719.07, stdev=380174.34 00:32:52.386 clat (usec): min=6308, max=15455, avg=9793.51, stdev=1176.41 00:32:52.386 lat (usec): min=6313, max=15460, avg=9870.23, stdev=1195.76 00:32:52.386 clat percentiles (usec): 00:32:52.386 | 1.00th=[ 7111], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 8979], 00:32:52.386 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:32:52.386 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:32:52.386 | 99.00th=[13173], 99.50th=[13960], 99.90th=[14484], 99.95th=[15139], 00:32:52.386 | 99.99th=[15401] 00:32:52.386 write: IOPS=6632, BW=25.9MiB/s (27.2MB/s)(25.9MiB/1001msec); 0 zone resets 00:32:52.386 slat (usec): min=2, max=19526, avg=74.99, stdev=442.12 00:32:52.386 clat (usec): min=471, max=36550, avg=10004.36, stdev=3223.57 00:32:52.386 lat (usec): min=498, max=36563, avg=10079.36, stdev=3255.03 00:32:52.386 clat percentiles (usec): 00:32:52.386 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9110], 00:32:52.386 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9634], 00:32:52.386 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[11076], 95.00th=[11994], 00:32:52.386 | 99.00th=[32637], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:32:52.386 | 99.99th=[36439] 00:32:52.386 bw ( KiB/s): min=25744, max=25744, per=40.64%, avg=25744.00, stdev= 0.00, samples=1 00:32:52.386 iops : min= 6436, max= 6436, avg=6436.00, stdev= 0.00, samples=1 00:32:52.386 lat (usec) : 500=0.01%, 750=0.01% 00:32:52.386 lat (msec) : 4=0.33%, 10=69.69%, 20=28.97%, 50=1.00% 00:32:52.386 cpu : usr=4.30%, sys=6.60%, ctx=664, majf=0, minf=2 00:32:52.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:52.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:52.386 issued rwts: total=6144,6639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:52.386 job3: (groupid=0, jobs=1): err= 0: pid=2808024: Mon Dec 9 17:43:21 2024 00:32:52.386 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:32:52.386 slat (nsec): min=1544, max=12358k, avg=151854.92, stdev=938745.03 00:32:52.386 clat (usec): min=4746, max=68506, avg=16204.04, stdev=9040.89 00:32:52.386 lat (usec): min=4771, max=68511, avg=16355.90, stdev=9154.16 00:32:52.386 clat percentiles (usec): 00:32:52.386 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10945], 00:32:52.386 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12387], 60.00th=[15795], 00:32:52.386 | 70.00th=[17695], 80.00th=[19006], 90.00th=[24773], 95.00th=[36963], 00:32:52.386 | 99.00th=[56361], 99.50th=[59507], 99.90th=[68682], 99.95th=[68682], 00:32:52.386 | 99.99th=[68682] 00:32:52.386 write: IOPS=3360, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1009msec); 0 zone resets 00:32:52.386 slat (usec): min=2, max=13509, avg=145.64, stdev=671.81 00:32:52.386 clat (usec): 
min=1667, max=68501, avg=23028.49, stdev=12215.63 00:32:52.386 lat (usec): min=1675, max=68507, avg=23174.13, stdev=12276.25 00:32:52.386 clat percentiles (usec): 00:32:52.386 | 1.00th=[ 4883], 5.00th=[ 7504], 10.00th=[ 9241], 20.00th=[10945], 00:32:52.386 | 30.00th=[12649], 40.00th=[17695], 50.00th=[23200], 60.00th=[26346], 00:32:52.386 | 70.00th=[30540], 80.00th=[33817], 90.00th=[35914], 95.00th=[42730], 00:32:52.386 | 99.00th=[60031], 99.50th=[60556], 99.90th=[61080], 99.95th=[68682], 00:32:52.386 | 99.99th=[68682] 00:32:52.386 bw ( KiB/s): min=10248, max=15856, per=20.61%, avg=13052.00, stdev=3965.45, samples=2 00:32:52.386 iops : min= 2562, max= 3964, avg=3263.00, stdev=991.36, samples=2 00:32:52.386 lat (msec) : 2=0.08%, 4=0.11%, 10=14.23%, 20=48.32%, 50=35.20% 00:32:52.386 lat (msec) : 100=2.06% 00:32:52.386 cpu : usr=2.48%, sys=5.75%, ctx=338, majf=0, minf=1 00:32:52.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:52.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:52.386 issued rwts: total=3072,3391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:52.386 00:32:52.386 Run status group 0 (all jobs): 00:32:52.386 READ: bw=56.6MiB/s (59.3MB/s), 9283KiB/s-24.0MiB/s (9506kB/s-25.1MB/s), io=57.2MiB (59.9MB), run=1001-1010msec 00:32:52.386 WRITE: bw=61.9MiB/s (64.9MB/s), 9.90MiB/s-25.9MiB/s (10.4MB/s-27.2MB/s), io=62.5MiB (65.5MB), run=1001-1010msec 00:32:52.386 00:32:52.386 Disk stats (read/write): 00:32:52.386 nvme0n1: ios=1563/1999, merge=0/0, ticks=8662/23510, in_queue=32172, util=98.90% 00:32:52.386 nvme0n2: ios=2560/2575, merge=0/0, ticks=31595/65426, in_queue=97021, util=81.20% 00:32:52.386 nvme0n3: ios=4899/5120, merge=0/0, ticks=15854/15912, in_queue=31766, util=86.66% 00:32:52.386 nvme0n4: ios=2599/2607, merge=0/0, ticks=39403/56408, in_queue=95811, util=97.74% 00:32:52.386 17:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:52.386 [global] 00:32:52.386 thread=1 00:32:52.386 invalidate=1 00:32:52.386 rw=randwrite 00:32:52.386 time_based=1 00:32:52.386 runtime=1 00:32:52.386 ioengine=libaio 00:32:52.386 direct=1 00:32:52.386 bs=4096 00:32:52.386 iodepth=128 00:32:52.386 norandommap=0 00:32:52.386 numjobs=1 00:32:52.386 00:32:52.386 verify_dump=1 00:32:52.386 verify_backlog=512 00:32:52.386 verify_state_save=0 00:32:52.386 do_verify=1 00:32:52.386 verify=crc32c-intel 00:32:52.386 [job0] 00:32:52.386 filename=/dev/nvme0n1 00:32:52.386 [job1] 00:32:52.386 filename=/dev/nvme0n2 00:32:52.386 [job2] 00:32:52.386 filename=/dev/nvme0n3 00:32:52.386 [job3] 00:32:52.386 filename=/dev/nvme0n4 00:32:52.386 Could not set queue depth (nvme0n1) 00:32:52.386 Could not set queue depth (nvme0n2) 00:32:52.386 Could not set queue depth (nvme0n3) 00:32:52.386 Could not set queue depth (nvme0n4) 00:32:52.644 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:52.644 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:52.644 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:52.644 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:52.644 fio-3.35 00:32:52.644 Starting 4 threads 00:32:54.017 00:32:54.017 job0: (groupid=0, jobs=1): err= 0: pid=2808398: Mon Dec 9 17:43:22 2024 00:32:54.017 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:32:54.017 slat (nsec): min=1400, max=7241.4k, avg=69016.02, stdev=531377.74 00:32:54.017 clat (usec): min=3895, max=26434, avg=9171.50, stdev=3722.51 00:32:54.017 lat (usec): min=3899, max=26443, avg=9240.52, stdev=3766.77 00:32:54.017 clat percentiles (usec): 00:32:54.017 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7046], 00:32:54.017 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 8160], 00:32:54.017 | 70.00th=[ 9372], 80.00th=[10945], 90.00th=[13435], 95.00th=[16712], 00:32:54.017 | 99.00th=[22676], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:32:54.017 | 99.99th=[26346] 00:32:54.017 write: IOPS=7014, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1003msec); 0 zone resets 00:32:54.017 slat (usec): min=2, max=20071, avg=68.94, stdev=521.84 00:32:54.017 clat (usec): min=1415, max=40161, avg=9367.50, stdev=7048.34 00:32:54.017 lat (usec): min=1427, max=40170, avg=9436.44, stdev=7093.55 00:32:54.017 clat percentiles (usec): 00:32:54.017 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 4752], 20.00th=[ 5735], 00:32:54.017 | 30.00th=[ 6521], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 7963], 00:32:54.017 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[15533], 95.00th=[28967], 00:32:54.017 | 99.00th=[36439], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:32:54.017 | 99.99th=[40109] 00:32:54.017 bw ( KiB/s): min=22832, max=32440, per=45.09%, avg=27636.00, stdev=6793.88, samples=2 00:32:54.017 iops : min= 5708, max= 8110, avg=6909.00, stdev=1698.47, samples=2 00:32:54.017 lat (msec) : 2=0.07%, 4=0.61%, 10=77.52%, 20=15.04%, 50=6.76% 00:32:54.017 cpu : usr=5.39%, sys=7.39%, ctx=540, majf=0, minf=1 00:32:54.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:32:54.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:54.017 issued rwts: total=6656,7036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:54.017 job1: (groupid=0, jobs=1): err= 0: pid=2808414: Mon Dec 9 17:43:22 2024 00:32:54.017 read: IOPS=1894, BW=7579KiB/s (7761kB/s)(7640KiB/1008msec) 00:32:54.017 slat (nsec): min=1423, max=18791k, avg=236206.08, stdev=1439772.47 00:32:54.017 clat (usec): min=1641, max=73400, avg=30318.98, stdev=16592.19 00:32:54.017 lat (usec): min=12096, max=73423, avg=30555.19, stdev=16709.74 00:32:54.017 clat percentiles (usec): 00:32:54.017 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13698], 20.00th=[13960], 00:32:54.017 | 30.00th=[18744], 40.00th=[20055], 50.00th=[22414], 60.00th=[27657], 00:32:54.017 | 70.00th=[37487], 80.00th=[51119], 90.00th=[53740], 95.00th=[60556], 00:32:54.017 | 99.00th=[68682], 99.50th=[70779], 99.90th=[72877], 99.95th=[73925], 00:32:54.017 | 99.99th=[73925] 00:32:54.017 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:32:54.017 slat (usec): min=2, max=22523, avg=264.38, stdev=1298.48 00:32:54.017 clat (usec): min=12832, max=73045, avg=33661.68, stdev=14007.64 00:32:54.017 lat (usec): min=12849, max=73110, avg=33926.06, stdev=14119.27 00:32:54.017 clat percentiles (usec): 00:32:54.017 | 1.00th=[18220], 5.00th=[18744], 10.00th=[19006], 20.00th=[19792], 
00:32:54.017 | 30.00th=[22414], 40.00th=[25822], 50.00th=[29230], 60.00th=[35390], 00:32:54.017 | 70.00th=[41157], 80.00th=[46400], 90.00th=[55313], 95.00th=[60556], 00:32:54.017 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[69731], 00:32:54.017 | 99.99th=[72877] 00:32:54.017 bw ( KiB/s): min= 8192, max= 8192, per=13.37%, avg=8192.00, stdev= 0.00, samples=2 00:32:54.017 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:32:54.017 lat (msec) : 2=0.03%, 20=28.47%, 50=52.80%, 100=18.70% 00:32:54.017 cpu : usr=1.59%, sys=2.28%, ctx=225, majf=0, minf=1 00:32:54.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:54.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:54.017 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:54.017 job2: (groupid=0, jobs=1): err= 0: pid=2808428: Mon Dec 9 17:43:22 2024 00:32:54.017 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:32:54.017 slat (nsec): min=1931, max=21654k, avg=165104.12, stdev=1117012.83 00:32:54.017 clat (usec): min=7950, max=70059, avg=21367.52, stdev=8568.27 00:32:54.017 lat (usec): min=7962, max=70085, avg=21532.62, stdev=8675.90 00:32:54.017 clat percentiles (usec): 00:32:54.018 | 1.00th=[10028], 5.00th=[10552], 10.00th=[12256], 20.00th=[16581], 00:32:54.018 | 30.00th=[17433], 40.00th=[18220], 50.00th=[19006], 60.00th=[19792], 00:32:54.018 | 70.00th=[21627], 80.00th=[24511], 90.00th=[35914], 95.00th=[39060], 00:32:54.018 | 99.00th=[48497], 99.50th=[49021], 99.90th=[55837], 99.95th=[55837], 00:32:54.018 | 99.99th=[69731] 00:32:54.018 write: IOPS=2755, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1008msec); 0 zone resets 00:32:54.018 slat (usec): min=2, max=25201, avg=202.24, stdev=1423.36 00:32:54.018 clat (usec): min=589, max=78098, avg=26129.44, stdev=16560.61 00:32:54.018 lat (usec): min=6202, max=78121, avg=26331.69, stdev=16689.24 00:32:54.018 clat percentiles (usec): 00:32:54.018 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10683], 20.00th=[10945], 00:32:54.018 | 30.00th=[13960], 40.00th=[18482], 50.00th=[20055], 60.00th=[23725], 00:32:54.018 | 70.00th=[28705], 80.00th=[42206], 90.00th=[55837], 95.00th=[58459], 00:32:54.018 | 99.00th=[64226], 99.50th=[64750], 99.90th=[70779], 99.95th=[78119], 00:32:54.018 | 99.99th=[78119] 00:32:54.018 bw ( KiB/s): min= 8192, max=13008, per=17.29%, avg=10600.00, stdev=3405.43, samples=2 00:32:54.018 iops : min= 2048, max= 3252, avg=2650.00, stdev=851.36, samples=2 00:32:54.018 lat (usec) : 750=0.02% 00:32:54.018 lat (msec) : 10=2.08%, 20=53.02%, 50=36.46%, 100=8.43% 00:32:54.018 cpu : usr=2.68%, sys=4.17%, ctx=180, majf=0, minf=1 00:32:54.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:54.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:54.018 issued rwts: total=2560,2778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:54.018 job3: (groupid=0, jobs=1): err= 0: pid=2808434: Mon Dec 9 17:43:22 2024 00:32:54.018 read: IOPS=3173, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1008msec) 00:32:54.018 slat (usec): min=2, max=12987, avg=133.06, stdev=881.61 00:32:54.018 clat (usec): min=1795, max=58022, avg=15488.62, stdev=8267.38 00:32:54.018 
lat (usec): min=5187, max=58027, avg=15621.68, stdev=8352.17 00:32:54.018 clat percentiles (usec): 00:32:54.018 | 1.00th=[ 7177], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10683], 00:32:54.018 | 30.00th=[11338], 40.00th=[13042], 50.00th=[13698], 60.00th=[14353], 00:32:54.018 | 70.00th=[14746], 80.00th=[15533], 90.00th=[20841], 95.00th=[33817], 00:32:54.018 | 99.00th=[52691], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:32:54.018 | 99.99th=[57934] 00:32:54.018 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:32:54.018 slat (usec): min=2, max=13464, avg=152.43, stdev=825.68 00:32:54.018 clat (usec): min=4035, max=58025, avg=21752.79, stdev=14544.94 00:32:54.018 lat (usec): min=4049, max=58040, avg=21905.22, stdev=14647.48 00:32:54.018 clat percentiles (usec): 00:32:54.018 | 1.00th=[ 5932], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[10421], 00:32:54.018 | 30.00th=[11469], 40.00th=[13042], 50.00th=[14091], 60.00th=[18220], 00:32:54.018 | 70.00th=[26870], 80.00th=[36439], 90.00th=[46400], 95.00th=[52167], 00:32:54.018 | 99.00th=[53216], 99.50th=[53216], 99.90th=[56886], 99.95th=[57934], 00:32:54.018 | 99.99th=[57934] 00:32:54.018 bw ( KiB/s): min=13872, max=14792, per=23.38%, avg=14332.00, stdev=650.54, samples=2 00:32:54.018 iops : min= 3468, max= 3698, avg=3583.00, stdev=162.63, samples=2 00:32:54.018 lat (msec) : 2=0.01%, 10=13.74%, 20=61.03%, 50=20.68%, 100=4.53% 00:32:54.018 cpu : usr=2.78%, sys=4.97%, ctx=288, majf=0, minf=1 00:32:54.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:54.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:54.018 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:54.018 00:32:54.018 Run status group 0 (all jobs): 00:32:54.018 READ: bw=55.5MiB/s (58.2MB/s), 7579KiB/s-25.9MiB/s (7761kB/s-27.2MB/s), io=56.0MiB (58.7MB), run=1003-1008msec 00:32:54.018 WRITE: bw=59.9MiB/s (62.8MB/s), 8127KiB/s-27.4MiB/s (8322kB/s-28.7MB/s), io=60.3MiB (63.3MB), run=1003-1008msec 00:32:54.018 00:32:54.018 Disk stats (read/write): 00:32:54.018 nvme0n1: ios=5414/5632, merge=0/0, ticks=42511/42486, in_queue=84997, util=96.79% 00:32:54.018 nvme0n2: ios=1572/1806, merge=0/0, ticks=14169/21340, in_queue=35509, util=96.74% 00:32:54.018 nvme0n3: ios=2167/2560, merge=0/0, ticks=15321/21569, in_queue=36890, util=99.16% 00:32:54.018 nvme0n4: ios=2578/2943, merge=0/0, ticks=37617/63837, in_queue=101454, util=96.70% 00:32:54.018 17:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:54.018 17:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2808543 00:32:54.018 17:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:54.018 17:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:54.018 [global] 00:32:54.018 thread=1 00:32:54.018 invalidate=1 00:32:54.018 rw=read 00:32:54.018 time_based=1 00:32:54.018 runtime=10 00:32:54.018 ioengine=libaio 00:32:54.018 direct=1 00:32:54.018 bs=4096 00:32:54.018 iodepth=1 00:32:54.018 norandommap=1 00:32:54.018 numjobs=1 00:32:54.018 00:32:54.018 [job0] 00:32:54.018 filename=/dev/nvme0n1 00:32:54.018 
[job1] 00:32:54.018 filename=/dev/nvme0n2 00:32:54.018 [job2] 00:32:54.018 filename=/dev/nvme0n3 00:32:54.018 [job3] 00:32:54.018 filename=/dev/nvme0n4 00:32:54.018 Could not set queue depth (nvme0n1) 00:32:54.018 Could not set queue depth (nvme0n2) 00:32:54.018 Could not set queue depth (nvme0n3) 00:32:54.018 Could not set queue depth (nvme0n4) 00:32:54.018 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.018 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.018 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.018 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.018 fio-3.35 00:32:54.018 Starting 4 threads 00:32:57.296 17:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:57.296 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37507072, buflen=4096 00:32:57.296 fio: pid=2808864, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:57.296 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:57.296 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:32:57.296 fio: pid=2808858, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:57.296 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:57.296 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:57.296 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4964352, buflen=4096 00:32:57.296 fio: pid=2808827, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:57.296 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:57.296 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:57.554 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60461056, buflen=4096 00:32:57.554 fio: pid=2808841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:57.554 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:57.554 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:57.554 00:32:57.554 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2808827: Mon Dec 9 17:43:26 2024 00:32:57.554 read: IOPS=382, BW=1530KiB/s (1567kB/s)(4848KiB/3168msec) 00:32:57.554 slat (usec): min=7, max=15870, avg=22.70, stdev=455.43 00:32:57.554 clat (usec): 
min=225, max=45110, avg=2571.13, stdev=9453.04 00:32:57.554 lat (usec): min=234, max=60981, avg=2593.83, stdev=9526.07 00:32:57.554 clat percentiles (usec): 00:32:57.554 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 243], 00:32:57.554 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:32:57.554 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[41157], 00:32:57.554 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[45351], 00:32:57.554 | 99.99th=[45351] 00:32:57.554 bw ( KiB/s): min= 96, max= 6479, per=3.90%, avg=1162.50, stdev=2604.55, samples=6 00:32:57.554 iops : min= 24, max= 1619, avg=290.50, stdev=650.83, samples=6 00:32:57.554 lat (usec) : 250=62.90%, 500=31.16%, 1000=0.08% 00:32:57.554 lat (msec) : 2=0.08%, 50=5.69% 00:32:57.554 cpu : usr=0.06%, sys=0.85%, ctx=1215, majf=0, minf=1 00:32:57.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.554 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.554 issued rwts: total=1213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.554 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2808841: Mon Dec 9 17:43:26 2024 00:32:57.554 read: IOPS=4367, BW=17.1MiB/s (17.9MB/s)(57.7MiB/3380msec) 00:32:57.554 slat (usec): min=6, max=16761, avg=11.82, stdev=234.70 00:32:57.554 clat (usec): min=170, max=1528, avg=213.83, stdev=19.44 00:32:57.554 lat (usec): min=180, max=17140, avg=225.65, stdev=237.74 00:32:57.554 clat percentiles (usec): 00:32:57.554 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 204], 00:32:57.554 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:32:57.554 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 229], 95.00th=[ 233], 00:32:57.554 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 334], 00:32:57.554 | 99.99th=[ 938] 00:32:57.554 bw ( KiB/s): min=17352, max=17856, per=59.19%, avg=17652.33, stdev=203.95, samples=6 00:32:57.554 iops : min= 4338, max= 4464, avg=4413.00, stdev=50.95, samples=6 00:32:57.554 lat (usec) : 250=99.38%, 500=0.58%, 750=0.01%, 1000=0.01% 00:32:57.554 lat (msec) : 2=0.01% 00:32:57.554 cpu : usr=2.16%, sys=7.16%, ctx=14766, majf=0, minf=2 00:32:57.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.554 issued rwts: total=14762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.554 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2808858: Mon Dec 9 17:43:26 2024 00:32:57.554 read: IOPS=24, BW=98.2KiB/s (101kB/s)(288KiB/2933msec) 00:32:57.554 slat (nsec): min=11360, max=65285, avg=24151.74, stdev=5521.23 00:32:57.554 clat (usec): min=515, max=41135, avg=40407.09, stdev=4767.90 00:32:57.554 lat (usec): min=552, max=41162, avg=40431.27, stdev=4766.38 00:32:57.554 clat percentiles (usec): 00:32:57.554 | 1.00th=[ 515], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:57.554 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:57.554 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:32:57.554 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:57.554 | 99.99th=[41157] 00:32:57.554 bw ( KiB/s): min= 96, max= 104, per=0.33%, avg=99.20, stdev= 4.38, samples=5 00:32:57.554 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:32:57.554 lat (usec) : 750=1.37% 00:32:57.554 lat (msec) : 50=97.26% 00:32:57.554 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=2 00:32:57.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.554 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.554 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.554 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2808864: Mon Dec 9 17:43:26 2024 00:32:57.554 read: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(35.8MiB/2710msec) 00:32:57.554 slat (nsec): min=7003, max=45836, avg=8219.47, stdev=1624.58 00:32:57.554 clat (usec): min=193, max=41374, avg=283.51, stdev=1277.02 00:32:57.554 lat (usec): min=210, max=41383, avg=291.73, stdev=1277.13 00:32:57.554 clat percentiles (usec): 00:32:57.554 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:32:57.554 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:32:57.554 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:32:57.554 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 494], 99.95th=[41157], 00:32:57.554 | 99.99th=[41157] 00:32:57.554 bw ( KiB/s): min=11728, max=16256, per=49.07%, avg=14635.20, stdev=1925.75, samples=5 00:32:57.554 iops : min= 2932, max= 4064, avg=3658.80, stdev=481.44, samples=5 00:32:57.555 lat (usec) : 250=73.92%, 500=25.97% 00:32:57.555 lat (msec) : 50=0.10% 00:32:57.555 cpu : usr=1.99%, sys=5.32%, ctx=9158, majf=0, minf=2 00:32:57.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.555 issued rwts: total=9158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.555 00:32:57.555 Run status group 0 (all jobs): 00:32:57.555 READ: bw=29.1MiB/s (30.5MB/s), 98.2KiB/s-17.1MiB/s (101kB/s-17.9MB/s), io=98.4MiB (103MB), run=2710-3380msec 00:32:57.555 00:32:57.555 Disk stats (read/write): 00:32:57.555 nvme0n1: ios=1049/0, merge=0/0, ticks=3762/0, in_queue=3762, util=99.48% 00:32:57.555 nvme0n2: ios=14704/0, merge=0/0, ticks=2998/0, in_queue=2998, util=94.66% 00:32:57.555 nvme0n3: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.48% 00:32:57.555 nvme0n4: ios=9151/0, merge=0/0, ticks=2378/0, in_queue=2378, util=96.44% 00:32:57.812 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:57.813 17:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:58.070 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:58.070 17:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:58.328 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:58.328 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:58.586 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:58.586 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:58.586 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:58.586 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2808543 00:32:58.586 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:58.586 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:58.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:58.845 nvmf hotplug test: fio failed as expected 00:32:58.845 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.103 rmmod nvme_tcp 00:32:59.103 rmmod nvme_fabrics 00:32:59.103 rmmod nvme_keyring 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2806005 ']' 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2806005 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2806005 ']' 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2806005 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806005 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806005' 00:32:59.103 killing process with pid 2806005 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2806005 00:32:59.103 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2806005 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.363 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.900 00:33:01.900 real 0m26.533s 00:33:01.900 user 1m32.806s 00:33:01.900 sys 0m11.514s 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.900 ************************************ 00:33:01.900 END TEST nvmf_fio_target 00:33:01.900 ************************************ 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:01.900 ************************************ 00:33:01.900 START TEST nvmf_bdevio 00:33:01.900 ************************************ 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:01.900 * Looking for test storage... 
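Before the bdevio run proceeds, it is worth spelling out the shape of the hotplug sequence that just completed above: a 10-second read job is launched in the background through fio-wrapper, the backing bdevs are deleted over RPC while I/O is still in flight, every fio job is then expected to abort with err=95 (Operation not supported), and the wrapper's non-zero exit status becomes the passing condition. A minimal sketch of that pattern, assuming a running target that exports Malloc-backed namespaces and the same script paths seen in the log (the bdev names here are illustrative):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    fio_wrapper=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper

    # Start the read workload in the background; this pid is what fio.sh
    # stores as fio_pid before sleeping to let I/O ramp up.
    $fio_wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the bdevs out from under the running jobs; each delete should
    # surface in the fio output as "io_u error ... Operation not supported".
    for malloc_bdev in Malloc0 Malloc1 Malloc2; do
        $rpc_py bdev_malloc_delete "$malloc_bdev"
    done

    # The test passes only if fio failed: wait propagates fio's exit status.
    fio_status=0
    wait $fio_pid || fio_status=$?
    if [ $fio_status -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi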
00:33:01.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.900 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.901 --rc genhtml_branch_coverage=1 00:33:01.901 --rc genhtml_function_coverage=1 00:33:01.901 --rc genhtml_legend=1 00:33:01.901 --rc geninfo_all_blocks=1 00:33:01.901 --rc geninfo_unexecuted_blocks=1 00:33:01.901 00:33:01.901 ' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.901 --rc genhtml_branch_coverage=1 00:33:01.901 --rc genhtml_function_coverage=1 00:33:01.901 --rc genhtml_legend=1 00:33:01.901 --rc geninfo_all_blocks=1 00:33:01.901 --rc geninfo_unexecuted_blocks=1 00:33:01.901 00:33:01.901 ' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.901 --rc genhtml_branch_coverage=1 00:33:01.901 --rc genhtml_function_coverage=1 00:33:01.901 --rc genhtml_legend=1 00:33:01.901 --rc geninfo_all_blocks=1 00:33:01.901 --rc geninfo_unexecuted_blocks=1 00:33:01.901 00:33:01.901 ' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.901 --rc genhtml_branch_coverage=1 00:33:01.901 --rc genhtml_function_coverage=1 00:33:01.901 --rc genhtml_legend=1 00:33:01.901 --rc geninfo_all_blocks=1 00:33:01.901 --rc geninfo_unexecuted_blocks=1 00:33:01.901 00:33:01.901 ' 00:33:01.901 17:43:30 
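The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.0 (here 1.15, so the legacy --rc lcov_* option names get exported): both version strings are split on '.', '-' and ':' into arrays, and the components are compared left to right as decimals, with the '<' case succeeding as soon as a left component is smaller. A condensed behavior sketch of that comparison, reconstructed from the trace rather than copied verbatim from the script:

    # Missing components count as 0, mirroring the padded decimal reads above.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=${#ver1[@]}
        ((${#ver2[@]} > len)) && len=${#ver2[@]}
        for ((v = 0; v < len; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_*_coverage=1 flags'

For 1.15 against 2 the first components already decide it (1 < 2), which is why the traced run returns 0 without ever looking at the 15.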
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.901 17:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.901 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.902 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.183 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:07.184 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:07.184 17:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:07.184 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:07.184 Found net devices under 0000:af:00.0: cvl_0_0 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:07.184 Found net devices under 0000:af:00.1: cvl_0_1 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.184 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:33:07.444 00:33:07.444 --- 10.0.0.2 ping statistics --- 00:33:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.444 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:07.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:07.444 00:33:07.444 --- 10.0.0.1 ping statistics --- 00:33:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.444 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.444 17:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2813092 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2813092 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2813092 ']' 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.444 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.704 [2024-12-09 17:43:36.623681] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:07.704 [2024-12-09 17:43:36.624604] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:33:07.704 [2024-12-09 17:43:36.624638] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.704 [2024-12-09 17:43:36.703141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.704 [2024-12-09 17:43:36.744492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.704 [2024-12-09 17:43:36.744529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.704 [2024-12-09 17:43:36.744536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.704 [2024-12-09 17:43:36.744542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.704 [2024-12-09 17:43:36.744547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.704 [2024-12-09 17:43:36.746056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:07.704 [2024-12-09 17:43:36.746168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:07.704 [2024-12-09 17:43:36.746276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.704 [2024-12-09 17:43:36.746276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:07.704 [2024-12-09 17:43:36.814887] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
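[Annotation] The nvmftestinit sequence traced earlier in this block (ip netns add through the two pings) reduces to a short run of ip(8)/iptables(8) calls: one port of the e810 NIC is moved into a private namespace and the two ports are addressed back-to-back. A minimal standalone sketch, using the interface and namespace names from this run (they differ per test bed):

  # Namespace wiring performed by nvmf_tcp_init, restated from the trace above.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: <rule>'         # tagged so teardown can strip only this rule
  ping -c 1 10.0.0.2                                 # reachability check, as in the trace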
00:33:07.704 [2024-12-09 17:43:36.815526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:07.704 [2024-12-09 17:43:36.815829] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:07.704 [2024-12-09 17:43:36.816018] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:07.704 [2024-12-09 17:43:36.816076] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.704 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.704 [2024-12-09 17:43:36.879078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.963 Malloc0 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.963 17:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.963 [2024-12-09 17:43:36.959362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.963 { 00:33:07.963 "params": { 00:33:07.963 "name": "Nvme$subsystem", 00:33:07.963 "trtype": "$TEST_TRANSPORT", 00:33:07.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.963 "adrfam": "ipv4", 00:33:07.963 "trsvcid": "$NVMF_PORT", 00:33:07.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.963 "hdgst": ${hdgst:-false}, 00:33:07.963 "ddgst": ${ddgst:-false} 00:33:07.963 }, 00:33:07.963 "method": "bdev_nvme_attach_controller" 00:33:07.963 } 00:33:07.963 EOF 00:33:07.963 )") 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:07.963 17:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:07.963 "params": { 00:33:07.963 "name": "Nvme1", 00:33:07.963 "trtype": "tcp", 00:33:07.963 "traddr": "10.0.0.2", 00:33:07.964 "adrfam": "ipv4", 00:33:07.964 "trsvcid": "4420", 00:33:07.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.964 "hdgst": false, 00:33:07.964 "ddgst": false 00:33:07.964 }, 00:33:07.964 "method": "bdev_nvme_attach_controller" 00:33:07.964 }' 00:33:07.964 [2024-12-09 17:43:37.012345] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
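[Annotation] The provisioning traced above goes through rpc_cmd, the test harness wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock. The same five calls can be issued by hand with the flags exactly as traced (a sketch of the equivalent invocations, not the harness itself):

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches as an initiator using the JSON printed by gen_nvmf_target_json above (the bdev_nvme_attach_controller config), fed in over /dev/fd/62.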
00:33:07.964 [2024-12-09 17:43:37.012391] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813125 ] 00:33:07.964 [2024-12-09 17:43:37.089337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:07.964 [2024-12-09 17:43:37.131548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.964 [2024-12-09 17:43:37.131653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.964 [2024-12-09 17:43:37.131654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:08.221 I/O targets: 00:33:08.221 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:08.221 00:33:08.221 00:33:08.221 CUnit - A unit testing framework for C - Version 2.1-3 00:33:08.221 http://cunit.sourceforge.net/ 00:33:08.221 00:33:08.221 00:33:08.221 Suite: bdevio tests on: Nvme1n1 00:33:08.221 Test: blockdev write read block ...passed 00:33:08.478 Test: blockdev write zeroes read block ...passed 00:33:08.479 Test: blockdev write zeroes read no split ...passed 00:33:08.479 Test: blockdev write zeroes read split ...passed 00:33:08.479 Test: blockdev write zeroes read split partial ...passed 00:33:08.479 Test: blockdev reset ...[2024-12-09 17:43:37.513772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:08.479 [2024-12-09 17:43:37.513834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e8b0 (9): Bad file descriptor 00:33:08.479 [2024-12-09 17:43:37.517084] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:33:08.479 passed 00:33:08.479 Test: blockdev write read 8 blocks ...passed 00:33:08.479 Test: blockdev write read size > 128k ...passed 00:33:08.479 Test: blockdev write read invalid size ...passed 00:33:08.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:08.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:08.479 Test: blockdev write read max offset ...passed 00:33:08.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:08.736 Test: blockdev writev readv 8 blocks ...passed 00:33:08.736 Test: blockdev writev readv 30 x 1block ...passed 00:33:08.736 Test: blockdev writev readv block ...passed 00:33:08.736 Test: blockdev writev readv size > 128k ...passed 00:33:08.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:08.736 Test: blockdev comparev and writev ...[2024-12-09 17:43:37.728485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.728516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.728530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.728538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.728829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.728839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.728850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.728857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.729148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.729159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.729166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.729446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.729458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.729469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:08.736 [2024-12-09 17:43:37.729476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:08.736 passed 00:33:08.736 Test: blockdev nvme passthru rw ...passed 00:33:08.736 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:43:37.811690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:08.736 [2024-12-09 17:43:37.811708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.811813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:08.736 [2024-12-09 17:43:37.811823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.811927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:08.736 [2024-12-09 17:43:37.811936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:08.736 [2024-12-09 17:43:37.812041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:08.736 [2024-12-09 17:43:37.812050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:08.736 passed 00:33:08.736 Test: blockdev nvme admin passthru ...passed 00:33:08.736 Test: blockdev copy ...passed 00:33:08.736 00:33:08.737 Run Summary: Type Total Ran Passed Failed Inactive 00:33:08.737 suites 1 1 n/a 0 0 00:33:08.737 tests 23 23 23 0 0 00:33:08.737 asserts 152 152 152 0 n/a 00:33:08.737 00:33:08.737 Elapsed time = 1.087 seconds 00:33:08.995 17:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.995 rmmod nvme_tcp 00:33:08.995 rmmod nvme_fabrics 00:33:08.995 rmmod nvme_keyring 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
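[Annotation] Teardown continues in the trace below: nvmftestfini stops the target, unloads the host-side modules (the rmmod lines above), and unpicks the firewall and namespace changes. Roughly, under the names of this run (_remove_spdk_ns is not expanded in the trace, so the namespace deletion line is an assumption):

  kill "$nvmfpid" && wait "$nvmfpid"                      # stop nvmf_tgt (pid 2813092 here)
  modprobe -v -r nvme-tcp nvme-fabrics                    # host-side modules, as traced
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged rule
  ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1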
00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2813092 ']' 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2813092 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2813092 ']' 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2813092 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813092 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:08.995 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813092' 00:33:08.995 killing process with pid 2813092 00:33:08.996 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2813092 00:33:08.996 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2813092 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.255 17:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.790 17:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.790 00:33:11.790 real 0m9.854s 00:33:11.790 user 
0m8.476s 00:33:11.790 sys 0m5.261s 00:33:11.790 17:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.790 17:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.790 ************************************ 00:33:11.790 END TEST nvmf_bdevio 00:33:11.790 ************************************ 00:33:11.790 17:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:11.790 00:33:11.790 real 4m31.652s 00:33:11.790 user 9m5.056s 00:33:11.790 sys 1m50.798s 00:33:11.790 17:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.790 17:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:11.790 ************************************ 00:33:11.790 END TEST nvmf_target_core_interrupt_mode 00:33:11.790 ************************************ 00:33:11.791 17:43:40 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:11.791 17:43:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:11.791 17:43:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.791 17:43:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.791 ************************************ 00:33:11.791 START TEST nvmf_interrupt 00:33:11.791 ************************************ 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:11.791 * Looking for test storage... 
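[Annotation] Each suite in this log is driven by run_test, which prints the START TEST / END TEST banners and the real/user/sys times seen above. A simplified sketch of the pattern (the real helper in autotest_common.sh does more bookkeeping than this):

  run_test() {                       # simplified; banners match the ones in this log
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
  }
  run_test nvmf_interrupt test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode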
00:33:11.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:11.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.791 --rc genhtml_branch_coverage=1 00:33:11.791 --rc genhtml_function_coverage=1 00:33:11.791 --rc genhtml_legend=1 00:33:11.791 --rc geninfo_all_blocks=1 00:33:11.791 --rc geninfo_unexecuted_blocks=1 00:33:11.791 00:33:11.791 ' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:11.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.791 --rc genhtml_branch_coverage=1 00:33:11.791 --rc genhtml_function_coverage=1 00:33:11.791 --rc genhtml_legend=1 00:33:11.791 --rc geninfo_all_blocks=1 00:33:11.791 --rc geninfo_unexecuted_blocks=1 00:33:11.791 00:33:11.791 ' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:11.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.791 --rc genhtml_branch_coverage=1 00:33:11.791 --rc genhtml_function_coverage=1 00:33:11.791 --rc genhtml_legend=1 00:33:11.791 --rc geninfo_all_blocks=1 00:33:11.791 --rc geninfo_unexecuted_blocks=1 00:33:11.791 00:33:11.791 ' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:11.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.791 --rc genhtml_branch_coverage=1 00:33:11.791 --rc genhtml_function_coverage=1 00:33:11.791 --rc genhtml_legend=1 00:33:11.791 --rc geninfo_all_blocks=1 00:33:11.791 --rc geninfo_unexecuted_blocks=1 00:33:11.791 00:33:11.791 ' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.791 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.792 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.792 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:11.792 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:11.792 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.792 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:18.364 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.364 17:43:46 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:18.364 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.364 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:18.365 Found net devices under 0000:af:00.0: cvl_0_0 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:18.365 Found net devices under 0000:af:00.1: cvl_0_1 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:18.365 17:43:46 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:33:18.365 00:33:18.365 --- 10.0.0.2 ping statistics --- 00:33:18.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.365 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:33:18.365
00:33:18.365 --- 10.0.0.1 ping statistics ---
00:33:18.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:18.365 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2816852
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2816852
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2816852 ']'
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:18.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:18.365 [2024-12-09 17:43:46.660233] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:18.365 [2024-12-09 17:43:46.661234] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:33:18.365 [2024-12-09 17:43:46.661276] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:18.365 [2024-12-09 17:43:46.741630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:18.365 [2024-12-09 17:43:46.781182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
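nvmfappstart then launches the target inside that namespace with --interrupt-mode and a two-core mask, and waitforlisten polls until the RPC socket answers. A minimal launch-and-wait sketch (binary path shortened from the workspace path; polling via rpc.py spdk_get_version is one plausible readiness probe, not necessarily the harness's exact loop):

# Start the target in interrupt mode inside the namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # Succeeds once the app is listening on /var/tmp/spdk.sock.
    ./scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done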
00:33:18.365 [2024-12-09 17:43:46.781224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:18.365 [2024-12-09 17:43:46.781231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:18.365 [2024-12-09 17:43:46.781237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:18.365 [2024-12-09 17:43:46.781258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:18.365 [2024-12-09 17:43:46.782400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:18.365 [2024-12-09 17:43:46.782401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:18.365 [2024-12-09 17:43:46.849381] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:18.365 [2024-12-09 17:43:46.849841] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:18.365 [2024-12-09 17:43:46.850079] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:33:18.365 5000+0 records in
00:33:18.365 5000+0 records out
00:33:18.365 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0177462 s, 577 MB/s
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:18.365 AIO0
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:18.365 [2024-12-09 17:43:46.975186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:18.365 17:43:46
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:18.365 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.366 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:18.366 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.366 17:43:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:18.366 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.366 17:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:18.366 [2024-12-09 17:43:47.015500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2816852 0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2816852 0 idle 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816852 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.25 reactor_0' 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816852 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.25 reactor_0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2816852 1 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2816852 1 idle 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816857 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816857 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2816899 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
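Before launching the load, the test provisioned the target entirely over RPC: a 10 MB zero-filled file becomes an AIO bdev, which is then exported through a TCP transport, a subsystem, a namespace, and a listener. Pulled out of the xtrace into one runnable sequence (the aiofile path is shortened from the workspace path used in this run):

# Stand up the AIO-backed NVMe/TCP subsystem exercised by this test.
dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000                 # 10 MB backing file
./scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048            # 2048-byte block size
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256    # queue depth 256
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation just launched above (-q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC) should then pull both reactors out of their interrupt sleep, which is what the busy check that follows asserts.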
00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2816852 0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2816852 0 busy 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:18.366 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816852 root 20 0 128.2g 46080 33792 R 13.3 0.0 0:00.27 reactor_0' 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816852 root 20 0 128.2g 46080 33792 R 13.3 0.0 0:00.27 reactor_0 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:18.623 17:43:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:33:19.553 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:33:19.553 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:19.553 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:19.553 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816852 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:02.64 reactor_0' 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816852 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:02.64 reactor_0 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2816852 1 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2816852 1 busy 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816857 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:01.37 reactor_1' 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816857 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:01.37 reactor_1 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:19.809 17:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2816899 00:33:29.767 Initializing NVMe Controllers 00:33:29.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.767 Controller IO queue size 256, less than required. 00:33:29.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:29.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:29.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:29.767 Initialization complete. Launching workers. 
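The busy/idle decision seen above samples one batch iteration of top per attempt, strips the %CPU column, and retries up to ten times with a one-second sleep; in this run the first sample caught reactor_0 at 13.3% and the retry a second later saw 99.9%. A self-contained sketch of that probe, assuming %CPU is field 9 of top's default thread layout as it is in this log:

# Return success when reactor_$idx of process $pid exceeds $threshold percent CPU.
reactor_busy() {
    local pid=$1 idx=$2 threshold=${3:-30}
    local line rate
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
    rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
    rate=${rate%.*}                                                # drop the fraction
    (( rate > threshold ))
}
reactor_busy 2816852 0 30 && echo "reactor_0 is busy"

The perf summary that follows is what those two busy reactors produced over the ten-second run.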
00:33:29.767 ========================================================
00:33:29.767 Latency(us)
00:33:29.767 Device Information : IOPS MiB/s Average min max
00:33:29.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16540.63 64.61 15484.63 2807.40 30289.34
00:33:29.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16680.43 65.16 15351.57 7441.49 25860.77
00:33:29.767 ========================================================
00:33:29.767 Total : 33221.07 129.77 15417.82 2807.40 30289.34
00:33:29.767
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2816852 0
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2816852 0 idle
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816852 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:20.25 reactor_0'
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816852 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:20.25 reactor_0
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2816852 1
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2816852 1 idle
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852
00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816857 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816857 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.767 17:43:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:29.767 17:43:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:29.767 17:43:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:29.767 17:43:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:29.767 17:43:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:29.767 17:43:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2816852 0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2816852 0 idle 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816852 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.51 reactor_0' 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816852 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.51 reactor_0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2816852 1 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2816852 1 idle 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2816852 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
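With the perf load finished and reactor 0 idle again (the reactor 1 re-check resumes below), the host side attached through the kernel initiator and waited for a block device carrying the subsystem serial. A hedged equivalent of that connect-and-wait step (hostnqn and hostid are the values generated for this machine; the retry loop mirrors waitforserial's 15 attempts):

# Attach the kernel NVMe/TCP initiator and wait for the namespace to surface.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    --hostid=801347e8-3fd0-e911-906e-0017a4403562
for _ in $(seq 1 15); do
    # SPDKISFASTANDAWESOME is the serial assigned when the subsystem was created.
    [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ] && break
    sleep 2
done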
00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2816852 -w 256 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2816857 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.11 reactor_1' 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2816857 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.11 reactor_1 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:31.672 17:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:31.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.931 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.931 rmmod nvme_tcp 00:33:31.931 rmmod nvme_fabrics 00:33:31.931 rmmod nvme_keyring 00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2816852 ']'
00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2816852
00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2816852 ']'
00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2816852
00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:32.189 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816852
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816852'
00:33:32.190 killing process with pid 2816852
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2816852
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2816852
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:32.190 17:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:34.724 17:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:34.724
00:33:34.724 real 0m22.933s
00:33:34.724 user 0m39.797s
00:33:34.724 sys 0m8.368s
00:33:34.724 17:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:34.724 17:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:34.724 ************************************
00:33:34.724 END TEST nvmf_interrupt
00:33:34.724 ************************************
00:33:34.724
00:33:34.724 real 27m29.186s
00:33:34.724 user 56m40.786s
00:33:34.724 sys 9m18.497s
00:33:34.724 17:44:03 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:34.724 17:44:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:34.724 ************************************
00:33:34.724 END TEST nvmf_tcp
00:33:34.724 ************************************
00:33:34.724 17:44:03 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:33:34.724 17:44:03 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:34.724 17:44:03 --
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:34.724 17:44:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.724 17:44:03 -- common/autotest_common.sh@10 -- # set +x 00:33:34.724 ************************************ 00:33:34.724 START TEST spdkcli_nvmf_tcp 00:33:34.724 ************************************ 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:34.724 * Looking for test storage... 00:33:34.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.724 --rc genhtml_branch_coverage=1 00:33:34.724 --rc genhtml_function_coverage=1 00:33:34.724 --rc genhtml_legend=1 00:33:34.724 --rc geninfo_all_blocks=1 00:33:34.724 --rc geninfo_unexecuted_blocks=1 00:33:34.724 00:33:34.724 ' 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.724 --rc genhtml_branch_coverage=1 00:33:34.724 --rc genhtml_function_coverage=1 00:33:34.724 --rc genhtml_legend=1 00:33:34.724 --rc geninfo_all_blocks=1 00:33:34.724 --rc geninfo_unexecuted_blocks=1 00:33:34.724 00:33:34.724 ' 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.724 --rc genhtml_branch_coverage=1 00:33:34.724 --rc genhtml_function_coverage=1 00:33:34.724 --rc genhtml_legend=1 00:33:34.724 --rc geninfo_all_blocks=1 00:33:34.724 --rc geninfo_unexecuted_blocks=1 00:33:34.724 00:33:34.724 ' 00:33:34.724 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.724 --rc genhtml_branch_coverage=1 00:33:34.724 --rc genhtml_function_coverage=1 00:33:34.724 --rc genhtml_legend=1 00:33:34.724 --rc geninfo_all_blocks=1 00:33:34.725 --rc geninfo_unexecuted_blocks=1 00:33:34.725 00:33:34.725 ' 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:34.725 
17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:34.725 17:44:03 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:34.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2819667 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2819667 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2819667 ']' 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.725 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.725 [2024-12-09 17:44:03.782138] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
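For the spdkcli test a second nvmf_tgt is brought up in poll mode (mask 0x3, -p 0), and spdkcli_job.py then feeds it the quoted create commands listed below, checking each command's output for the paired expected substring. The same tree can be built by calling spdkcli.py directly, passing the command as arguments the way the check_match step later does with ll /nvmf (a sketch against the default /var/tmp/spdk.sock socket):

# Drive a few of the same spdkcli commands by hand.
./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
./scripts/spdkcli.py ll /nvmf    # print the resulting tree, as the match step does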
00:33:34.725 [2024-12-09 17:44:03.782191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819667 ] 00:33:34.725 [2024-12-09 17:44:03.854651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:34.725 [2024-12-09 17:44:03.896361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.725 [2024-12-09 17:44:03.896362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.984 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.984 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:34.984 17:44:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:34.984 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.984 17:44:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.984 17:44:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:34.984 17:44:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:34.984 17:44:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:34.984 17:44:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.984 17:44:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.984 17:44:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:34.984 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:34.984 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:34.984 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:34.984 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:34.984 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:34.984 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:34.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:34.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:34.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:34.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:34.984 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:34.985 ' 00:33:37.684 [2024-12-09 17:44:06.707276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.057 [2024-12-09 17:44:08.047689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:41.586 [2024-12-09 17:44:10.531421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:44.115 [2024-12-09 17:44:12.686176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:45.489 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:45.489 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:45.489 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:45.489 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:45.489 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:45.489 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:45.489 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:45.489 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:45.489 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:45.489 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:45.489 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:45.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:45.489 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:45.489 17:44:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:45.747 17:44:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.005 
17:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.005 17:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:46.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:46.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:46.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:46.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:46.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:46.005 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:46.005 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:46.005 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:46.005 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:46.005 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:46.005 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:46.005 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:46.005 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:46.005 ' 00:33:52.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:52.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:52.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:52.562 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:52.562 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:52.562 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:52.562 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:52.562 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:52.562 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:52.562 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:52.562 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:52.562 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:52.562 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:52.562 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.562 
17:44:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2819667 ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819667' 00:33:52.562 killing process with pid 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2819667 ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2819667 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2819667 ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2819667 00:33:52.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2819667) - No such process 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2819667 is not found' 00:33:52.562 Process with pid 2819667 is not found 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:52.562 00:33:52.562 real 0m17.331s 00:33:52.562 user 0m38.169s 00:33:52.562 sys 0m0.811s 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.562 17:44:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.562 ************************************ 00:33:52.562 END TEST spdkcli_nvmf_tcp 00:33:52.562 ************************************ 00:33:52.562 17:44:20 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:52.562 17:44:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:52.562 17:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.562 17:44:20 -- common/autotest_common.sh@10 -- # set +x 00:33:52.562 ************************************ 00:33:52.562 START TEST nvmf_identify_passthru 00:33:52.562 ************************************ 00:33:52.562 17:44:20 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:52.562 * Looking for test 
storage... 00:33:52.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.562 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:52.562 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:52.562 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:52.562 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.562 17:44:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:52.562 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.562 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:52.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.562 --rc genhtml_branch_coverage=1 00:33:52.563 --rc genhtml_function_coverage=1 00:33:52.563 --rc genhtml_legend=1 00:33:52.563 --rc geninfo_all_blocks=1 00:33:52.563 --rc geninfo_unexecuted_blocks=1 00:33:52.563 00:33:52.563 ' 00:33:52.563 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:52.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.563 --rc genhtml_branch_coverage=1 00:33:52.563 --rc genhtml_function_coverage=1 00:33:52.563 --rc genhtml_legend=1 00:33:52.563 --rc geninfo_all_blocks=1 00:33:52.563 --rc geninfo_unexecuted_blocks=1 00:33:52.563 00:33:52.563 ' 00:33:52.563 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:52.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.563 --rc genhtml_branch_coverage=1 00:33:52.563 --rc genhtml_function_coverage=1 00:33:52.563 --rc genhtml_legend=1 00:33:52.563 --rc geninfo_all_blocks=1 00:33:52.563 --rc geninfo_unexecuted_blocks=1 00:33:52.563 00:33:52.563 ' 00:33:52.563 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:52.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.563 --rc genhtml_branch_coverage=1 00:33:52.563 --rc genhtml_function_coverage=1 00:33:52.563 --rc genhtml_legend=1 00:33:52.563 --rc geninfo_all_blocks=1 00:33:52.563 --rc geninfo_unexecuted_blocks=1 00:33:52.563 00:33:52.563 ' 00:33:52.563 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:52.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.563 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.563 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.563 17:44:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.563 17:44:21 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.564 17:44:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:52.564 17:44:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.564 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.564 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:52.564 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:52.564 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:52.564 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.849 17:44:26 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.849 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:57.850 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:57.850 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:57.850 Found net devices under 0000:af:00.0: cvl_0_0 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:57.850 Found net devices under 0000:af:00.1: cvl_0_1 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.850 17:44:26 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:33:57.850 00:33:57.850 --- 10.0.0.2 ping statistics --- 00:33:57.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.850 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:33:57.850 00:33:57.850 --- 10.0.0.1 ping statistics --- 00:33:57.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.850 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:57.850 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:57.850 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.850 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:57.850 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:58.110 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:58.110 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:58.110 17:44:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:58.110 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:58.110 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:58.110 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:58.110 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:58.110 17:44:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:02.296 17:44:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ807001JM1P0FGN 00:34:02.296 17:44:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:02.296 17:44:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:02.296 17:44:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2826869 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:06.485 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2826869 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2826869 ']' 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.485 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.485 [2024-12-09 17:44:35.550501] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:34:06.485 [2024-12-09 17:44:35.550549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.485 [2024-12-09 17:44:35.629482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:06.743 [2024-12-09 17:44:35.671299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.743 [2024-12-09 17:44:35.671333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:06.743 [2024-12-09 17:44:35.671340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.743 [2024-12-09 17:44:35.671346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.743 [2024-12-09 17:44:35.671351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.743 [2024-12-09 17:44:35.672891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.743 [2024-12-09 17:44:35.673000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.743 [2024-12-09 17:44:35.673131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.743 [2024-12-09 17:44:35.673131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:06.743 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.743 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:06.743 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:06.743 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.743 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.743 INFO: Log level set to 20 00:34:06.743 INFO: Requests: 00:34:06.743 { 00:34:06.743 "jsonrpc": "2.0", 00:34:06.743 "method": "nvmf_set_config", 00:34:06.744 "id": 1, 00:34:06.744 "params": { 00:34:06.744 "admin_cmd_passthru": { 00:34:06.744 "identify_ctrlr": true 00:34:06.744 } 00:34:06.744 } 00:34:06.744 } 00:34:06.744 00:34:06.744 INFO: response: 00:34:06.744 { 00:34:06.744 "jsonrpc": "2.0", 00:34:06.744 "id": 1, 00:34:06.744 "result": true 00:34:06.744 } 00:34:06.744 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.744 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.744 INFO: Setting log level to 20 00:34:06.744 INFO: Setting log level to 20 00:34:06.744 INFO: Log level set to 20 00:34:06.744 INFO: Log level set to 20 00:34:06.744 INFO: Requests: 00:34:06.744 { 00:34:06.744 "jsonrpc": "2.0", 00:34:06.744 "method": "framework_start_init", 00:34:06.744 "id": 1 00:34:06.744 } 00:34:06.744 00:34:06.744 INFO: Requests: 00:34:06.744 { 00:34:06.744 "jsonrpc": "2.0", 00:34:06.744 "method": "framework_start_init", 00:34:06.744 "id": 1 00:34:06.744 } 00:34:06.744 00:34:06.744 [2024-12-09 17:44:35.780323] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:06.744 INFO: response: 00:34:06.744 { 00:34:06.744 "jsonrpc": "2.0", 00:34:06.744 "id": 1, 00:34:06.744 "result": true 00:34:06.744 } 00:34:06.744 00:34:06.744 INFO: response: 00:34:06.744 { 00:34:06.744 "jsonrpc": "2.0", 00:34:06.744 "id": 1, 00:34:06.744 "result": true 00:34:06.744 } 00:34:06.744 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.744 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.744 17:44:35 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.744 INFO: Setting log level to 40 00:34:06.744 INFO: Setting log level to 40 00:34:06.744 INFO: Setting log level to 40 00:34:06.744 [2024-12-09 17:44:35.793581] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.744 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.744 17:44:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.744 17:44:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.025 Nvme0n1 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.025 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.025 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.025 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.025 [2024-12-09 17:44:38.704565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.025 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.025 [ 00:34:10.025 { 00:34:10.025 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:10.025 "subtype": "Discovery", 00:34:10.025 "listen_addresses": [], 00:34:10.025 "allow_any_host": true, 00:34:10.025 "hosts": [] 00:34:10.025 }, 00:34:10.025 { 00:34:10.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.025 "subtype": "NVMe", 00:34:10.025 "listen_addresses": [ 00:34:10.025 { 00:34:10.025 "trtype": "TCP", 00:34:10.025 "adrfam": "IPv4", 00:34:10.025 "traddr": "10.0.0.2", 00:34:10.025 "trsvcid": "4420" 00:34:10.025 } 00:34:10.025 ], 00:34:10.025 "allow_any_host": true, 00:34:10.025 "hosts": [], 00:34:10.025 "serial_number": 
"SPDK00000000000001", 00:34:10.025 "model_number": "SPDK bdev Controller", 00:34:10.025 "max_namespaces": 1, 00:34:10.025 "min_cntlid": 1, 00:34:10.025 "max_cntlid": 65519, 00:34:10.025 "namespaces": [ 00:34:10.025 { 00:34:10.025 "nsid": 1, 00:34:10.025 "bdev_name": "Nvme0n1", 00:34:10.025 "name": "Nvme0n1", 00:34:10.025 "nguid": "A0AFD9D858E9451DB6A863C210766AEF", 00:34:10.025 "uuid": "a0afd9d8-58e9-451d-b6a8-63c210766aef" 00:34:10.025 } 00:34:10.025 ] 00:34:10.025 } 00:34:10.025 ] 00:34:10.025 17:44:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:10.026 17:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:10.284 17:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:10.284 17:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.284 17:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:10.284 17:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.284 rmmod nvme_tcp 00:34:10.284 rmmod nvme_fabrics 00:34:10.284 rmmod nvme_keyring 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2826869 ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2826869 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2826869 ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2826869 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826869 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826869' 00:34:10.284 killing process with pid 2826869 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2826869 00:34:10.284 17:44:39 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2826869 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:11.658 17:44:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.658 17:44:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:11.658 17:44:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.194 17:44:42 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:14.194 00:34:14.194 real 0m21.962s 00:34:14.194 user 0m27.306s 00:34:14.194 sys 0m6.134s 00:34:14.194 17:44:42 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.194 17:44:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.194 ************************************ 00:34:14.194 END TEST nvmf_identify_passthru 00:34:14.194 ************************************ 00:34:14.194 17:44:42 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:14.194 17:44:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:14.194 17:44:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.194 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:34:14.194 ************************************ 00:34:14.194 START TEST nvmf_dif 00:34:14.194 ************************************ 00:34:14.194 17:44:42 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:14.194 * Looking for test 
storage... 00:34:14.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:14.194 17:44:43 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:14.194 17:44:43 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:14.194 17:44:43 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:14.194 17:44:43 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:14.194 17:44:43 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:14.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.195 --rc genhtml_branch_coverage=1 00:34:14.195 --rc genhtml_function_coverage=1 00:34:14.195 --rc genhtml_legend=1 00:34:14.195 --rc geninfo_all_blocks=1 00:34:14.195 --rc geninfo_unexecuted_blocks=1 00:34:14.195 00:34:14.195 ' 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:14.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.195 --rc genhtml_branch_coverage=1 00:34:14.195 --rc genhtml_function_coverage=1 00:34:14.195 --rc genhtml_legend=1 00:34:14.195 --rc geninfo_all_blocks=1 00:34:14.195 --rc geninfo_unexecuted_blocks=1 00:34:14.195 00:34:14.195 ' 00:34:14.195 17:44:43 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:14.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.195 --rc genhtml_branch_coverage=1 00:34:14.195 --rc genhtml_function_coverage=1 00:34:14.195 --rc genhtml_legend=1 00:34:14.195 --rc geninfo_all_blocks=1 00:34:14.195 --rc geninfo_unexecuted_blocks=1 00:34:14.195 00:34:14.195 ' 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:14.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.195 --rc genhtml_branch_coverage=1 00:34:14.195 --rc genhtml_function_coverage=1 00:34:14.195 --rc genhtml_legend=1 00:34:14.195 --rc geninfo_all_blocks=1 00:34:14.195 --rc geninfo_unexecuted_blocks=1 00:34:14.195 00:34:14.195 ' 00:34:14.195 17:44:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.195 17:44:43 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.195 17:44:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.195 17:44:43 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.195 17:44:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.195 17:44:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:14.195 17:44:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.195 17:44:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:14.195 17:44:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:14.195 17:44:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:14.195 17:44:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:14.195 17:44:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:14.195 17:44:43 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:14.195 17:44:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:20.770 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.770 
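The device scan above matches the supported Intel E810/X722 and Mellanox PCI IDs against the bus, then resolves each matching function to its kernel net interface through sysfs. A minimal sketch of that resolution step, assuming the sysfs layout seen in this run (the loop is an illustrative reduction, not the nvmf/common.sh code itself):

    # list the net interfaces bound to each supported PCI function
    for pci in 0000:af:00.0 0000:af:00.1; do    # PCI addresses taken from this run
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${net##*/}"
        done
    done

The two names this yields (cvl_0_0 and cvl_0_1) drive the TCP init step below: one interface is moved into the cvl_0_0_ns_spdk network namespace so target (10.0.0.2) and initiator (10.0.0.1) can exchange NVMe/TCP traffic on a single host.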
17:44:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:20.770 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:20.770 Found net devices under 0000:af:00.0: cvl_0_0 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:20.770 Found net devices under 0000:af:00.1: cvl_0_1 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.770 17:44:48 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.771 17:44:48 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:34:20.771 00:34:20.771 --- 10.0.0.2 ping statistics --- 00:34:20.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.771 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:34:20.771 00:34:20.771 --- 10.0.0.1 ping statistics --- 00:34:20.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.771 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:20.771 17:44:49 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:22.677 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:22.935 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:22.935 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:22.935 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:22.935 17:44:52 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.935 17:44:52 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.935 17:44:52 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.935 17:44:52 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.935 17:44:52 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.935 17:44:52 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.193 17:44:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:23.193 17:44:52 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:23.193 17:44:52 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.193 17:44:52 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2832399 00:34:23.193 17:44:52 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:23.193 17:44:52 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2832399 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2832399 ']' 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@842 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.193 17:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.193 [2024-12-09 17:44:52.172235] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:34:23.193 [2024-12-09 17:44:52.172276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.193 [2024-12-09 17:44:52.251427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.193 [2024-12-09 17:44:52.290451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.193 [2024-12-09 17:44:52.290485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.194 [2024-12-09 17:44:52.290492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.194 [2024-12-09 17:44:52.290498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.194 [2024-12-09 17:44:52.290503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.194 [2024-12-09 17:44:52.291065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:23.452 17:44:52 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.452 17:44:52 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.452 17:44:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:23.452 17:44:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.452 [2024-12-09 17:44:52.426608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.452 17:44:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.452 17:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.452 ************************************ 00:34:23.452 START TEST fio_dif_1_default 00:34:23.452 ************************************ 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub 
in "$@" 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.452 bdev_null0 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.452 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.453 [2024-12-09 17:44:52.502917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.453 { 00:34:23.453 "params": { 00:34:23.453 "name": "Nvme$subsystem", 00:34:23.453 "trtype": "$TEST_TRANSPORT", 00:34:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.453 "adrfam": "ipv4", 00:34:23.453 "trsvcid": "$NVMF_PORT", 00:34:23.453 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.453 "hdgst": ${hdgst:-false}, 00:34:23.453 "ddgst": ${ddgst:-false} 00:34:23.453 }, 00:34:23.453 "method": "bdev_nvme_attach_controller" 00:34:23.453 } 00:34:23.453 EOF 00:34:23.453 )") 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:23.453 "params": { 00:34:23.453 "name": "Nvme0", 00:34:23.453 "trtype": "tcp", 00:34:23.453 "traddr": "10.0.0.2", 00:34:23.453 "adrfam": "ipv4", 00:34:23.453 "trsvcid": "4420", 00:34:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.453 "hdgst": false, 00:34:23.453 "ddgst": false 00:34:23.453 }, 00:34:23.453 "method": "bdev_nvme_attach_controller" 00:34:23.453 }' 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:23.453 17:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.711 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.711 fio-3.35 00:34:23.711 Starting 1 thread 00:34:35.911 00:34:35.911 filename0: (groupid=0, jobs=1): err= 0: pid=2832768: Mon Dec 9 17:45:03 2024 00:34:35.911 read: IOPS=214, BW=858KiB/s (878kB/s)(8608KiB/10034msec) 00:34:35.911 slat (nsec): min=5782, max=32599, avg=6055.41, stdev=753.93 00:34:35.911 clat (usec): min=358, max=42488, avg=18632.46, stdev=20151.93 00:34:35.911 lat (usec): min=364, max=42494, avg=18638.52, stdev=20151.91 00:34:35.911 clat percentiles (usec): 00:34:35.911 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 400], 00:34:35.912 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 502], 60.00th=[40633], 00:34:35.912 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41681], 95.00th=[41681], 00:34:35.912 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:35.912 | 99.99th=[42730] 00:34:35.912 bw ( KiB/s): min= 736, max= 992, per=100.00%, avg=859.20, stdev=62.53, samples=20 00:34:35.912 iops : min= 184, max= 248, avg=214.80, stdev=15.63, samples=20 00:34:35.912 lat (usec) : 500=49.91%, 750=5.11% 00:34:35.912 lat (msec) : 50=44.98% 00:34:35.912 cpu : usr=92.23%, sys=7.51%, ctx=14, majf=0, minf=0 00:34:35.912 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.912 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.912 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:35.912 
00:34:35.912 Run status group 0 (all jobs): 00:34:35.912 READ: bw=858KiB/s (878kB/s), 858KiB/s-858KiB/s (878kB/s-878kB/s), io=8608KiB (8815kB), run=10034-10034msec 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 00:34:35.912 real 0m11.244s 00:34:35.912 user 0m16.179s 00:34:35.912 sys 0m1.090s 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 ************************************ 00:34:35.912 END TEST fio_dif_1_default 00:34:35.912 ************************************ 00:34:35.912 17:45:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:35.912 17:45:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.912 17:45:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 ************************************ 00:34:35.912 START TEST fio_dif_1_multi_subsystems 00:34:35.912 ************************************ 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 bdev_null0 00:34:35.912 17:45:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 [2024-12-09 17:45:03.823595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 bdev_null1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:35.912 { 00:34:35.912 "params": { 00:34:35.912 "name": "Nvme$subsystem", 00:34:35.912 "trtype": "$TEST_TRANSPORT", 00:34:35.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.912 "adrfam": "ipv4", 00:34:35.912 "trsvcid": "$NVMF_PORT", 00:34:35.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.912 "hdgst": ${hdgst:-false}, 00:34:35.912 "ddgst": ${ddgst:-false} 00:34:35.912 }, 00:34:35.912 "method": "bdev_nvme_attach_controller" 00:34:35.912 } 00:34:35.912 EOF 00:34:35.912 )") 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:35.912 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:35.913 { 00:34:35.913 "params": { 00:34:35.913 "name": "Nvme$subsystem", 00:34:35.913 "trtype": "$TEST_TRANSPORT", 00:34:35.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.913 "adrfam": "ipv4", 00:34:35.913 "trsvcid": "$NVMF_PORT", 00:34:35.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.913 "hdgst": ${hdgst:-false}, 00:34:35.913 "ddgst": ${ddgst:-false} 00:34:35.913 }, 00:34:35.913 "method": "bdev_nvme_attach_controller" 00:34:35.913 } 00:34:35.913 EOF 00:34:35.913 )") 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:35.913 "params": { 00:34:35.913 "name": "Nvme0", 00:34:35.913 "trtype": "tcp", 00:34:35.913 "traddr": "10.0.0.2", 00:34:35.913 "adrfam": "ipv4", 00:34:35.913 "trsvcid": "4420", 00:34:35.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.913 "hdgst": false, 00:34:35.913 "ddgst": false 00:34:35.913 }, 00:34:35.913 "method": "bdev_nvme_attach_controller" 00:34:35.913 },{ 00:34:35.913 "params": { 00:34:35.913 "name": "Nvme1", 00:34:35.913 "trtype": "tcp", 00:34:35.913 "traddr": "10.0.0.2", 00:34:35.913 "adrfam": "ipv4", 00:34:35.913 "trsvcid": "4420", 00:34:35.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:35.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:35.913 "hdgst": false, 00:34:35.913 "ddgst": false 00:34:35.913 }, 00:34:35.913 "method": "bdev_nvme_attach_controller" 00:34:35.913 }' 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:35.913 17:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.913 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:35.913 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:35.913 fio-3.35 00:34:35.913 Starting 2 threads 00:34:45.879 00:34:45.879 filename0: (groupid=0, jobs=1): err= 0: pid=2834833: Mon Dec 9 17:45:15 2024 00:34:45.879 read: IOPS=200, BW=804KiB/s (823kB/s)(8064KiB/10036msec) 00:34:45.879 slat (nsec): min=5870, max=45499, avg=7043.67, stdev=2245.39 00:34:45.879 clat (usec): min=384, max=42581, avg=19892.38, stdev=20406.02 00:34:45.879 lat (usec): min=391, max=42588, avg=19899.43, stdev=20405.46 00:34:45.879 clat percentiles (usec): 00:34:45.879 | 1.00th=[ 396], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:34:45.879 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 603], 60.00th=[40633], 00:34:45.879 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:45.879 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:45.879 | 99.99th=[42730] 00:34:45.879 bw ( KiB/s): min= 704, max= 960, per=67.24%, avg=804.80, stdev=61.66, samples=20 00:34:45.879 iops : min= 176, max= 240, avg=201.20, stdev=15.42, samples=20 00:34:45.879 lat (usec) : 500=46.23%, 750=5.95%, 1000=0.20% 00:34:45.879 lat (msec) : 50=47.62% 00:34:45.879 cpu : usr=96.67%, sys=3.07%, ctx=13, majf=0, minf=24 00:34:45.879 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.879 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.879 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:45.879 filename1: (groupid=0, jobs=1): err= 0: pid=2834834: Mon Dec 9 17:45:15 2024 00:34:45.879 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10012msec) 00:34:45.879 slat (nsec): min=5876, max=45760, avg=7793.57, stdev=3061.15 00:34:45.879 clat (usec): min=403, max=42546, avg=40675.64, stdev=3651.36 00:34:45.879 lat (usec): min=409, max=42553, avg=40683.43, stdev=3651.39 00:34:45.879 clat percentiles (usec): 00:34:45.879 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:45.879 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:45.879 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:45.879 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:45.879 | 99.99th=[42730] 00:34:45.879 bw ( KiB/s): min= 384, max= 448, per=32.70%, avg=392.00, stdev=17.60, samples=20 00:34:45.879 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:34:45.879 lat (usec) : 500=0.81% 00:34:45.879 lat (msec) : 50=99.19% 00:34:45.879 cpu : usr=96.64%, sys=3.10%, ctx=13, majf=0, minf=74 00:34:45.879 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.879 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.879 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.879 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:45.879 00:34:45.879 Run status group 0 (all jobs): 00:34:45.879 READ: bw=1196KiB/s (1224kB/s), 393KiB/s-804KiB/s (403kB/s-823kB/s), io=11.7MiB (12.3MB), run=10012-10036msec 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.138 00:34:46.138 real 0m11.477s 00:34:46.138 user 0m26.968s 00:34:46.138 sys 0m1.033s 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.138 17:45:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.138 ************************************ 00:34:46.138 END TEST fio_dif_1_multi_subsystems 00:34:46.138 ************************************ 00:34:46.138 17:45:15 
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:46.138 17:45:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:46.138 17:45:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.138 17:45:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:46.397 ************************************ 00:34:46.397 START TEST fio_dif_rand_params 00:34:46.397 ************************************ 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.397 bdev_null0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.397 [2024-12-09 17:45:15.376182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:46.397 { 00:34:46.397 "params": { 00:34:46.397 "name": "Nvme$subsystem", 00:34:46.397 "trtype": "$TEST_TRANSPORT", 00:34:46.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:46.397 "adrfam": "ipv4", 00:34:46.397 "trsvcid": "$NVMF_PORT", 00:34:46.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:46.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:46.397 "hdgst": ${hdgst:-false}, 00:34:46.397 "ddgst": ${ddgst:-false} 00:34:46.397 }, 00:34:46.397 "method": "bdev_nvme_attach_controller" 00:34:46.397 } 00:34:46.397 EOF 00:34:46.397 )") 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:46.397 17:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:46.398 "params": { 00:34:46.398 "name": "Nvme0", 00:34:46.398 "trtype": "tcp", 00:34:46.398 "traddr": "10.0.0.2", 00:34:46.398 "adrfam": "ipv4", 00:34:46.398 "trsvcid": "4420", 00:34:46.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:46.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:46.398 "hdgst": false, 00:34:46.398 "ddgst": false 00:34:46.398 }, 00:34:46.398 "method": "bdev_nvme_attach_controller" 00:34:46.398 }' 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:46.398 17:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.656 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:46.656 ... 
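Everything traced above reduces to a short recipe: create a DIF-enabled null bdev, export it over NVMe/TCP, then point fio's SPDK bdev plugin at a JSON config that attaches the controller. A minimal standalone sketch of the same flow, assuming SPDK is checked out at ./spdk with nvmf_tgt already running; the fio flags stand in for the generated /dev/fd job file and the attached bdev name Nvme0n1 is likewise an assumption:

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3 -- as in the trace.
  ./spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # nvme0.json holds the bdev_nvme_attach_controller block printed by gen_nvmf_target_json above.
  LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio --name=filename0 \
      --ioengine=spdk_bdev --spdk_json_conf=./nvme0.json --filename=Nvme0n1 \
      --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5 --time_based=1 --thread=1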
00:34:46.656 fio-3.35 00:34:46.656 Starting 3 threads 00:34:53.216 00:34:53.216 filename0: (groupid=0, jobs=1): err= 0: pid=2837159: Mon Dec 9 17:45:21 2024 00:34:53.216 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(189MiB/5043msec) 00:34:53.216 slat (nsec): min=6122, max=44446, avg=10450.11, stdev=2363.24 00:34:53.216 clat (usec): min=3368, max=90273, avg=9964.94, stdev=9012.45 00:34:53.216 lat (usec): min=3375, max=90280, avg=9975.39, stdev=9012.53 00:34:53.216 clat percentiles (usec): 00:34:53.216 | 1.00th=[ 3621], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6980], 00:34:53.216 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:34:53.216 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[10552], 00:34:53.216 | 99.00th=[50070], 99.50th=[50070], 99.90th=[90702], 99.95th=[90702], 00:34:53.216 | 99.99th=[90702] 00:34:53.216 bw ( KiB/s): min=16128, max=48384, per=33.14%, avg=38656.00, stdev=9279.78, samples=10 00:34:53.216 iops : min= 126, max= 378, avg=302.00, stdev=72.50, samples=10 00:34:53.216 lat (msec) : 4=3.24%, 10=90.28%, 20=1.98%, 50=3.64%, 100=0.86% 00:34:53.216 cpu : usr=93.99%, sys=5.67%, ctx=18, majf=0, minf=45 00:34:53.216 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.216 issued rwts: total=1512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.216 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.216 filename0: (groupid=0, jobs=1): err= 0: pid=2837160: Mon Dec 9 17:45:21 2024 00:34:53.216 read: IOPS=311, BW=39.0MiB/s (40.9MB/s)(196MiB/5017msec) 00:34:53.216 slat (nsec): min=6125, max=23326, avg=10516.50, stdev=2217.56 00:34:53.216 clat (usec): min=3146, max=51219, avg=9605.01, stdev=7487.01 00:34:53.216 lat (usec): min=3152, max=51231, avg=9615.53, stdev=7487.15 00:34:53.216 clat percentiles (usec): 00:34:53.217 | 1.00th=[ 3654], 5.00th=[ 4883], 10.00th=[ 5866], 20.00th=[ 6521], 00:34:53.217 | 30.00th=[ 7570], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:34:53.217 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:34:53.217 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:34:53.217 | 99.99th=[51119] 00:34:53.217 bw ( KiB/s): min=28672, max=54272, per=34.29%, avg=39987.20, stdev=7981.11, samples=10 00:34:53.217 iops : min= 224, max= 424, avg=312.40, stdev=62.35, samples=10 00:34:53.217 lat (msec) : 4=3.83%, 10=81.28%, 20=11.63%, 50=2.04%, 100=1.21% 00:34:53.217 cpu : usr=94.18%, sys=5.52%, ctx=8, majf=0, minf=72 00:34:53.217 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.217 issued rwts: total=1565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.217 filename0: (groupid=0, jobs=1): err= 0: pid=2837161: Mon Dec 9 17:45:21 2024 00:34:53.217 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(190MiB/5043msec) 00:34:53.217 slat (nsec): min=6145, max=42410, avg=10623.11, stdev=2424.72 00:34:53.217 clat (usec): min=3040, max=89114, avg=9925.97, stdev=6357.23 00:34:53.217 lat (usec): min=3046, max=89122, avg=9936.60, stdev=6357.28 00:34:53.217 clat percentiles (usec): 00:34:53.217 | 1.00th=[ 3752], 5.00th=[ 5997], 10.00th=[ 6390], 
20.00th=[ 6849], 00:34:53.217 | 30.00th=[ 7635], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:34:53.217 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11863], 95.00th=[12518], 00:34:53.217 | 99.00th=[47449], 99.50th=[50594], 99.90th=[52691], 99.95th=[88605], 00:34:53.217 | 99.99th=[88605] 00:34:53.217 bw ( KiB/s): min=32577, max=49408, per=33.28%, avg=38816.10, stdev=5067.08, samples=10 00:34:53.217 iops : min= 254, max= 386, avg=303.20, stdev=39.66, samples=10 00:34:53.217 lat (msec) : 4=1.38%, 10=61.53%, 20=34.85%, 50=1.71%, 100=0.53% 00:34:53.217 cpu : usr=94.90%, sys=4.82%, ctx=8, majf=0, minf=64 00:34:53.217 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.217 issued rwts: total=1518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.217 00:34:53.217 Run status group 0 (all jobs): 00:34:53.217 READ: bw=114MiB/s (119MB/s), 37.5MiB/s-39.0MiB/s (39.3MB/s-40.9MB/s), io=574MiB (602MB), run=5017-5043msec 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 bdev_null0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 [2024-12-09 17:45:21.752028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 bdev_null1 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 bdev_null2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
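Note the two /dev/fd arguments in the fio_plugin command just traced: fd 62 carries the JSON attach config and fd 61 the fio job file, both fed in by the caller through process substitution so neither ever touches disk. A condensed, illustrative sketch of that plumbing (the real helpers live in target/dif.sh and common/autotest_common.sh and do more bookkeeping):

  fio_bdev() {
      # Preload the SPDK engine so fio can resolve ioengine=spdk_bdev.
      LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio "$@"
  }
  # JSON config arrives on fd 62, the fio job file on fd 61:
  fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
      62< <(create_json_sub_conf 0 1 2) 61< <(gen_fio_conf)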
00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:53.217 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.217 { 00:34:53.217 "params": { 00:34:53.218 "name": "Nvme$subsystem", 00:34:53.218 "trtype": "$TEST_TRANSPORT", 00:34:53.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.218 "adrfam": "ipv4", 00:34:53.218 "trsvcid": "$NVMF_PORT", 00:34:53.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.218 "hdgst": ${hdgst:-false}, 00:34:53.218 "ddgst": ${ddgst:-false} 00:34:53.218 }, 00:34:53.218 "method": "bdev_nvme_attach_controller" 00:34:53.218 } 00:34:53.218 EOF 00:34:53.218 )") 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.218 { 00:34:53.218 "params": { 00:34:53.218 "name": "Nvme$subsystem", 00:34:53.218 "trtype": "$TEST_TRANSPORT", 00:34:53.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.218 "adrfam": "ipv4", 00:34:53.218 "trsvcid": "$NVMF_PORT", 00:34:53.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.218 "hdgst": ${hdgst:-false}, 00:34:53.218 "ddgst": ${ddgst:-false} 00:34:53.218 }, 00:34:53.218 "method": "bdev_nvme_attach_controller" 00:34:53.218 } 00:34:53.218 EOF 00:34:53.218 )") 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:53.218 17:45:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.218 { 00:34:53.218 "params": { 00:34:53.218 "name": "Nvme$subsystem", 00:34:53.218 "trtype": "$TEST_TRANSPORT", 00:34:53.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.218 "adrfam": "ipv4", 00:34:53.218 "trsvcid": "$NVMF_PORT", 00:34:53.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.218 "hdgst": ${hdgst:-false}, 00:34:53.218 "ddgst": ${ddgst:-false} 00:34:53.218 }, 00:34:53.218 "method": "bdev_nvme_attach_controller" 00:34:53.218 } 00:34:53.218 EOF 00:34:53.218 )") 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:53.218 "params": { 00:34:53.218 "name": "Nvme0", 00:34:53.218 "trtype": "tcp", 00:34:53.218 "traddr": "10.0.0.2", 00:34:53.218 "adrfam": "ipv4", 00:34:53.218 "trsvcid": "4420", 00:34:53.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:53.218 "hdgst": false, 00:34:53.218 "ddgst": false 00:34:53.218 }, 00:34:53.218 "method": "bdev_nvme_attach_controller" 00:34:53.218 },{ 00:34:53.218 "params": { 00:34:53.218 "name": "Nvme1", 00:34:53.218 "trtype": "tcp", 00:34:53.218 "traddr": "10.0.0.2", 00:34:53.218 "adrfam": "ipv4", 00:34:53.218 "trsvcid": "4420", 00:34:53.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.218 "hdgst": false, 00:34:53.218 "ddgst": false 00:34:53.218 }, 00:34:53.218 "method": "bdev_nvme_attach_controller" 00:34:53.218 },{ 00:34:53.218 "params": { 00:34:53.218 "name": "Nvme2", 00:34:53.218 "trtype": "tcp", 00:34:53.218 "traddr": "10.0.0.2", 00:34:53.218 "adrfam": "ipv4", 00:34:53.218 "trsvcid": "4420", 00:34:53.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:53.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:53.218 "hdgst": false, 00:34:53.218 "ddgst": false 00:34:53.218 }, 00:34:53.218 "method": "bdev_nvme_attach_controller" 00:34:53.218 }' 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
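The grep/awk dance visible in this part of the trace is the harness checking whether the fio plugin links against an ASan runtime; if it does, that runtime has to sit in LD_PRELOAD ahead of the plugin so the sanitizer initializes before fio's own allocations. A minimal sketch of the same check, assuming the plugin path used throughout this run:

  plugin=./spdk/build/fio/spdk_bdev
  # Third ldd column is the resolved library path; empty when not linked against ASan.
  asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt.asan' | awk '{print $3}')
  # asan_lib is empty in this run (non-ASan build), so only the plugin gets preloaded,
  # matching the LD_PRELOAD=' .../build/fio/spdk_bdev' line that follows.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61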
00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:53.218 17:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.218 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:53.218 ... 00:34:53.218 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:53.218 ... 00:34:53.218 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:53.218 ... 00:34:53.218 fio-3.35 00:34:53.218 Starting 24 threads 00:35:05.540 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838203: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=595, BW=2381KiB/s (2438kB/s)(23.2MiB/10001msec) 00:35:05.540 slat (usec): min=8, max=110, avg=36.22, stdev=14.10 00:35:05.540 clat (usec): min=14254, max=42305, avg=26600.43, stdev=2176.73 00:35:05.540 lat (usec): min=14267, max=42345, avg=26636.65, stdev=2177.62 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[21365], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.540 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.540 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.540 | 99.00th=[31065], 99.50th=[31065], 99.90th=[42206], 99.95th=[42206], 00:35:05.540 | 99.99th=[42206] 00:35:05.540 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2384.58, stdev=122.16, samples=19 00:35:05.540 iops : min= 544, max= 640, avg=596.11, stdev=30.52, samples=19 00:35:05.540 lat (msec) : 20=0.94%, 50=99.06% 00:35:05.540 cpu : usr=98.15%, sys=1.25%, ctx=60, majf=0, minf=25 00:35:05.540 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838204: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10006msec) 00:35:05.540 slat (usec): min=7, max=100, avg=44.81, stdev=18.62 00:35:05.540 clat (usec): min=8534, max=55860, avg=26559.56, stdev=2618.50 00:35:05.540 lat (usec): min=8584, max=55877, avg=26604.37, stdev=2619.16 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.540 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.540 | 70.00th=[26870], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.540 | 99.00th=[31065], 99.50th=[31327], 99.90th=[55837], 99.95th=[55837], 00:35:05.540 | 99.99th=[55837] 00:35:05.540 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2364.84, stdev=161.83, samples=19 00:35:05.540 iops : min= 512, max= 640, avg=591.21, stdev=40.46, samples=19 00:35:05.540 lat (msec) : 10=0.27%, 20=0.44%, 50=99.02%, 100=0.27% 00:35:05.540 cpu : usr=98.75%, sys=0.85%, ctx=35, majf=0, minf=41 00:35:05.540 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838205: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=590, BW=2363KiB/s (2420kB/s)(23.2MiB/10055msec) 00:35:05.540 slat (nsec): min=6871, max=93092, avg=37961.61, stdev=18757.94 00:35:05.540 clat (usec): min=14238, max=54155, avg=26617.48, stdev=2034.25 00:35:05.540 lat (usec): min=14273, max=54172, avg=26655.44, stdev=2037.17 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[24249], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:35:05.540 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.540 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.540 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[54264], 00:35:05.540 | 99.99th=[54264] 00:35:05.540 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2374.40, stdev=113.54, samples=20 00:35:05.540 iops : min= 544, max= 640, avg=593.60, stdev=28.39, samples=20 00:35:05.540 lat (msec) : 20=0.30%, 50=99.63%, 100=0.07% 00:35:05.540 cpu : usr=98.72%, sys=0.89%, ctx=17, majf=0, minf=36 00:35:05.540 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838206: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=594, BW=2376KiB/s (2433kB/s)(23.2MiB/10019msec) 00:35:05.540 slat (nsec): min=7408, max=92230, avg=32897.60, stdev=20447.94 00:35:05.540 clat (usec): min=10288, max=42330, avg=26636.79, stdev=2357.37 00:35:05.540 lat (usec): min=10296, max=42357, avg=26669.68, stdev=2361.86 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[17171], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:35:05.540 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:35:05.540 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.540 | 99.00th=[32375], 99.50th=[34866], 99.90th=[41681], 99.95th=[42206], 00:35:05.540 | 99.99th=[42206] 00:35:05.540 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2374.15, stdev=140.81, samples=20 00:35:05.540 iops : min= 512, max= 640, avg=593.50, stdev=35.22, samples=20 00:35:05.540 lat (msec) : 20=1.28%, 50=98.72% 00:35:05.540 cpu : usr=98.67%, sys=0.83%, ctx=47, majf=0, minf=20 00:35:05.540 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838207: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10006msec) 00:35:05.540 slat (nsec): min=6406, max=81501, avg=43717.81, 
stdev=12534.99 00:35:05.540 clat (usec): min=8520, max=60758, avg=26590.93, stdev=2652.52 00:35:05.540 lat (usec): min=8573, max=60777, avg=26634.65, stdev=2651.80 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.540 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.540 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.540 | 99.00th=[31065], 99.50th=[31327], 99.90th=[55837], 99.95th=[55837], 00:35:05.540 | 99.99th=[60556] 00:35:05.540 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2364.84, stdev=161.83, samples=19 00:35:05.540 iops : min= 512, max= 640, avg=591.21, stdev=40.46, samples=19 00:35:05.540 lat (msec) : 10=0.27%, 20=0.35%, 50=99.11%, 100=0.27% 00:35:05.540 cpu : usr=98.00%, sys=1.31%, ctx=124, majf=0, minf=22 00:35:05.540 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838208: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10007msec) 00:35:05.540 slat (nsec): min=5742, max=73674, avg=20874.74, stdev=10120.82 00:35:05.540 clat (usec): min=9652, max=46817, avg=26799.47, stdev=2798.67 00:35:05.540 lat (usec): min=9666, max=46836, avg=26820.35, stdev=2799.13 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[16188], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:35:05.540 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.540 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30802], 00:35:05.540 | 99.00th=[35390], 99.50th=[39584], 99.90th=[46924], 99.95th=[46924], 00:35:05.540 | 99.99th=[46924] 00:35:05.540 bw ( KiB/s): min= 2160, max= 2560, per=4.17%, avg=2364.37, stdev=118.04, samples=19 00:35:05.540 iops : min= 540, max= 640, avg=591.05, stdev=29.53, samples=19 00:35:05.540 lat (msec) : 10=0.27%, 20=1.38%, 50=98.35% 00:35:05.540 cpu : usr=98.51%, sys=1.06%, ctx=32, majf=0, minf=48 00:35:05.540 IO depths : 1=5.2%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838209: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10007msec) 00:35:05.540 slat (nsec): min=6942, max=98721, avg=44086.00, stdev=17331.25 00:35:05.540 clat (usec): min=8541, max=56585, avg=26563.42, stdev=2662.24 00:35:05.540 lat (usec): min=8594, max=56602, avg=26607.51, stdev=2662.16 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.540 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.540 | 70.00th=[26870], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.540 | 99.00th=[31065], 99.50th=[31327], 99.90th=[56361], 99.95th=[56361], 00:35:05.540 | 99.99th=[56361] 
00:35:05.540 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2364.84, stdev=161.83, samples=19 00:35:05.540 iops : min= 512, max= 640, avg=591.21, stdev=40.46, samples=19 00:35:05.540 lat (msec) : 10=0.27%, 20=0.42%, 50=99.04%, 100=0.27% 00:35:05.540 cpu : usr=98.87%, sys=0.74%, ctx=17, majf=0, minf=39 00:35:05.540 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.540 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.540 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.540 filename0: (groupid=0, jobs=1): err= 0: pid=2838210: Mon Dec 9 17:45:33 2024 00:35:05.540 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.3MiB/10002msec) 00:35:05.540 slat (nsec): min=3775, max=84889, avg=40037.59, stdev=15619.24 00:35:05.540 clat (usec): min=11066, max=37285, avg=26434.32, stdev=2461.74 00:35:05.540 lat (usec): min=11077, max=37292, avg=26474.36, stdev=2461.56 00:35:05.540 clat percentiles (usec): 00:35:05.540 | 1.00th=[18482], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:05.540 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26346], 60.00th=[26608], 00:35:05.540 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30540], 00:35:05.540 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36963], 99.95th=[37487], 00:35:05.540 | 99.99th=[37487] 00:35:05.540 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2388.21, stdev=112.87, samples=19 00:35:05.541 iops : min= 544, max= 640, avg=597.05, stdev=28.22, samples=19 00:35:05.541 lat (msec) : 20=1.31%, 50=98.69% 00:35:05.541 cpu : usr=98.37%, sys=1.07%, ctx=100, majf=0, minf=38 00:35:05.541 IO depths : 1=5.5%, 2=11.1%, 4=22.7%, 8=53.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838211: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=594, BW=2376KiB/s (2433kB/s)(23.2MiB/10019msec) 00:35:05.541 slat (nsec): min=7008, max=79547, avg=17439.20, stdev=8757.47 00:35:05.541 clat (usec): min=14246, max=33851, avg=26790.65, stdev=2072.70 00:35:05.541 lat (usec): min=14276, max=33875, avg=26808.09, stdev=2070.74 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26870], 60.00th=[26870], 00:35:05.541 | 70.00th=[27132], 80.00th=[28705], 90.00th=[29754], 95.00th=[30802], 00:35:05.541 | 99.00th=[31065], 99.50th=[31327], 99.90th=[33817], 99.95th=[33817], 00:35:05.541 | 99.99th=[33817] 00:35:05.541 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2374.15, stdev=140.81, samples=20 00:35:05.541 iops : min= 512, max= 640, avg=593.50, stdev=35.22, samples=20 00:35:05.541 lat (msec) : 20=0.54%, 50=99.46% 00:35:05.541 cpu : usr=98.71%, sys=0.90%, ctx=15, majf=0, minf=26 00:35:05.541 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: 
total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838212: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=592, BW=2371KiB/s (2428kB/s)(23.2MiB/10013msec) 00:35:05.541 slat (nsec): min=7777, max=92899, avg=39557.90, stdev=18311.63 00:35:05.541 clat (usec): min=16739, max=41313, avg=26611.60, stdev=2036.77 00:35:05.541 lat (usec): min=16755, max=41325, avg=26651.16, stdev=2038.55 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.541 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.541 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.541 | 99.00th=[31065], 99.50th=[31327], 99.90th=[39584], 99.95th=[41157], 00:35:05.541 | 99.99th=[41157] 00:35:05.541 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2371.37, stdev=143.86, samples=19 00:35:05.541 iops : min= 512, max= 640, avg=592.84, stdev=35.96, samples=19 00:35:05.541 lat (msec) : 20=0.37%, 50=99.63% 00:35:05.541 cpu : usr=98.86%, sys=0.69%, ctx=43, majf=0, minf=25 00:35:05.541 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838213: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10004msec) 00:35:05.541 slat (nsec): min=3761, max=80137, avg=42803.17, stdev=13152.58 00:35:05.541 clat (usec): min=14303, max=35201, avg=26606.78, stdev=1993.32 00:35:05.541 lat (usec): min=14362, max=35213, avg=26649.58, stdev=1992.56 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:35:05.541 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.541 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35390], 99.95th=[35390], 00:35:05.541 | 99.99th=[35390] 00:35:05.541 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2371.58, stdev=130.48, samples=19 00:35:05.541 iops : min= 544, max= 640, avg=592.89, stdev=32.62, samples=19 00:35:05.541 lat (msec) : 20=0.34%, 50=99.66% 00:35:05.541 cpu : usr=98.07%, sys=1.25%, ctx=169, majf=0, minf=28 00:35:05.541 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838214: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=593, BW=2372KiB/s (2429kB/s)(23.2MiB/10010msec) 00:35:05.541 slat (usec): min=7, max=114, avg=37.45, stdev=14.24 00:35:05.541 clat (usec): min=15695, max=39563, avg=26674.05, stdev=1985.90 00:35:05.541 lat (usec): min=15712, max=39611, avg=26711.50, stdev=1986.36 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 
20.00th=[25035], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.541 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.541 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35390], 99.95th=[35390], 00:35:05.541 | 99.99th=[39584] 00:35:05.541 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2371.58, stdev=143.76, samples=19 00:35:05.541 iops : min= 512, max= 640, avg=592.89, stdev=35.94, samples=19 00:35:05.541 lat (msec) : 20=0.40%, 50=99.60% 00:35:05.541 cpu : usr=97.93%, sys=1.21%, ctx=183, majf=0, minf=19 00:35:05.541 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838215: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=595, BW=2381KiB/s (2438kB/s)(23.2MiB/10001msec) 00:35:05.541 slat (usec): min=7, max=102, avg=29.67, stdev=14.08 00:35:05.541 clat (usec): min=10420, max=31446, avg=26664.54, stdev=2108.17 00:35:05.541 lat (usec): min=10433, max=31462, avg=26694.20, stdev=2107.64 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[21627], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.541 | 70.00th=[27132], 80.00th=[28705], 90.00th=[29492], 95.00th=[30540], 00:35:05.541 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:35:05.541 | 99.99th=[31327] 00:35:05.541 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2384.58, stdev=122.16, samples=19 00:35:05.541 iops : min= 544, max= 640, avg=596.11, stdev=30.52, samples=19 00:35:05.541 lat (msec) : 20=0.77%, 50=99.23% 00:35:05.541 cpu : usr=98.30%, sys=1.12%, ctx=62, majf=0, minf=26 00:35:05.541 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838216: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=593, BW=2372KiB/s (2429kB/s)(23.2MiB/10010msec) 00:35:05.541 slat (nsec): min=3780, max=92911, avg=39210.18, stdev=17865.26 00:35:05.541 clat (usec): min=18410, max=35979, avg=26610.67, stdev=1912.51 00:35:05.541 lat (usec): min=18423, max=35992, avg=26649.88, stdev=1914.18 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.541 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.541 | 99.00th=[31065], 99.50th=[31065], 99.90th=[35914], 99.95th=[35914], 00:35:05.541 | 99.99th=[35914] 00:35:05.541 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2371.37, stdev=143.86, samples=19 00:35:05.541 iops : min= 512, max= 640, avg=592.84, stdev=35.96, samples=19 00:35:05.541 lat (msec) : 20=0.27%, 50=99.73% 00:35:05.541 cpu : usr=98.39%, sys=1.03%, ctx=74, majf=0, minf=20 00:35:05.541 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838217: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=592, BW=2372KiB/s (2428kB/s)(23.2MiB/10012msec) 00:35:05.541 slat (nsec): min=3849, max=84924, avg=23263.43, stdev=12144.49 00:35:05.541 clat (usec): min=14819, max=44702, avg=26782.69, stdev=2206.40 00:35:05.541 lat (usec): min=14829, max=44714, avg=26805.96, stdev=2207.17 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.541 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29754], 95.00th=[30802], 00:35:05.541 | 99.00th=[31327], 99.50th=[34341], 99.90th=[41681], 99.95th=[43254], 00:35:05.541 | 99.99th=[44827] 00:35:05.541 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2371.37, stdev=141.57, samples=19 00:35:05.541 iops : min= 512, max= 640, avg=592.84, stdev=35.39, samples=19 00:35:05.541 lat (msec) : 20=0.83%, 50=99.17% 00:35:05.541 cpu : usr=97.52%, sys=1.53%, ctx=256, majf=0, minf=39 00:35:05.541 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:05.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.541 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.541 filename1: (groupid=0, jobs=1): err= 0: pid=2838218: Mon Dec 9 17:45:33 2024 00:35:05.541 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10018msec) 00:35:05.541 slat (usec): min=6, max=104, avg=34.24, stdev=20.95 00:35:05.541 clat (usec): min=12277, max=35533, avg=26653.35, stdev=2111.61 00:35:05.541 lat (usec): min=12285, max=35565, avg=26687.60, stdev=2109.80 00:35:05.541 clat percentiles (usec): 00:35:05.541 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.541 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.541 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.541 | 99.00th=[31065], 99.50th=[31589], 99.90th=[35390], 99.95th=[35390], 00:35:05.541 | 99.99th=[35390] 00:35:05.542 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2374.15, stdev=140.81, samples=20 00:35:05.542 iops : min= 512, max= 640, avg=593.50, stdev=35.22, samples=20 00:35:05.542 lat (msec) : 20=0.81%, 50=99.19% 00:35:05.542 cpu : usr=99.04%, sys=0.55%, ctx=39, majf=0, minf=28 00:35:05.542 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838219: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=595, BW=2381KiB/s (2438kB/s)(23.2MiB/10001msec) 00:35:05.542 slat (nsec): min=6763, max=93301, avg=38939.45, 
stdev=17992.49 00:35:05.542 clat (usec): min=14227, max=36519, avg=26508.80, stdev=2112.14 00:35:05.542 lat (usec): min=14249, max=36535, avg=26547.74, stdev=2114.70 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[21365], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.542 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.542 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[34341], 00:35:05.542 | 99.99th=[36439] 00:35:05.542 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2384.58, stdev=114.46, samples=19 00:35:05.542 iops : min= 544, max= 640, avg=596.11, stdev=28.60, samples=19 00:35:05.542 lat (msec) : 20=0.87%, 50=99.13% 00:35:05.542 cpu : usr=98.78%, sys=0.84%, ctx=22, majf=0, minf=28 00:35:05.542 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838220: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=592, BW=2371KiB/s (2428kB/s)(23.2MiB/10019msec) 00:35:05.542 slat (nsec): min=8257, max=91887, avg=31329.17, stdev=17634.50 00:35:05.542 clat (usec): min=14169, max=31449, avg=26675.24, stdev=1915.04 00:35:05.542 lat (usec): min=14191, max=31486, avg=26706.57, stdev=1917.45 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:35:05.542 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.542 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:35:05.542 | 99.99th=[31327] 00:35:05.542 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2374.40, stdev=113.54, samples=20 00:35:05.542 iops : min= 544, max= 640, avg=593.60, stdev=28.39, samples=20 00:35:05.542 lat (msec) : 20=0.32%, 50=99.68% 00:35:05.542 cpu : usr=98.66%, sys=0.91%, ctx=49, majf=0, minf=27 00:35:05.542 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838221: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10002msec) 00:35:05.542 slat (nsec): min=3735, max=77645, avg=41672.28, stdev=13129.31 00:35:05.542 clat (usec): min=14445, max=33252, avg=26604.38, stdev=1977.29 00:35:05.542 lat (usec): min=14472, max=33264, avg=26646.05, stdev=1977.00 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.542 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:35:05.542 | 99.00th=[31065], 99.50th=[31327], 99.90th=[33162], 99.95th=[33162], 00:35:05.542 | 99.99th=[33162] 00:35:05.542 bw ( KiB/s): min= 
2176, max= 2560, per=4.18%, avg=2371.11, stdev=130.47, samples=19 00:35:05.542 iops : min= 544, max= 640, avg=592.74, stdev=32.60, samples=19 00:35:05.542 lat (msec) : 20=0.29%, 50=99.71% 00:35:05.542 cpu : usr=98.24%, sys=1.15%, ctx=81, majf=0, minf=32 00:35:05.542 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838222: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10003msec) 00:35:05.542 slat (usec): min=6, max=124, avg=45.57, stdev=19.16 00:35:05.542 clat (usec): min=14266, max=34746, avg=26550.16, stdev=1960.75 00:35:05.542 lat (usec): min=14278, max=34780, avg=26595.73, stdev=1963.03 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.542 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.542 | 99.00th=[31065], 99.50th=[31327], 99.90th=[34866], 99.95th=[34866], 00:35:05.542 | 99.99th=[34866] 00:35:05.542 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2371.11, stdev=130.47, samples=19 00:35:05.542 iops : min= 544, max= 640, avg=592.74, stdev=32.60, samples=19 00:35:05.542 lat (msec) : 20=0.44%, 50=99.56% 00:35:05.542 cpu : usr=98.93%, sys=0.66%, ctx=17, majf=0, minf=36 00:35:05.542 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838223: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10006msec) 00:35:05.542 slat (usec): min=6, max=101, avg=44.67, stdev=18.79 00:35:05.542 clat (usec): min=8522, max=60789, avg=26553.83, stdev=2637.44 00:35:05.542 lat (usec): min=8533, max=60806, avg=26598.51, stdev=2638.16 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.542 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:35:05.542 | 70.00th=[26870], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:35:05.542 | 99.00th=[30802], 99.50th=[31065], 99.90th=[55837], 99.95th=[55837], 00:35:05.542 | 99.99th=[60556] 00:35:05.542 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2364.84, stdev=161.83, samples=19 00:35:05.542 iops : min= 512, max= 640, avg=591.21, stdev=40.46, samples=19 00:35:05.542 lat (msec) : 10=0.27%, 20=0.44%, 50=99.02%, 100=0.27% 00:35:05.542 cpu : usr=98.28%, sys=1.02%, ctx=269, majf=0, minf=41 00:35:05.542 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 
latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838224: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10007msec) 00:35:05.542 slat (nsec): min=6212, max=80029, avg=22851.99, stdev=11203.71 00:35:05.542 clat (usec): min=9652, max=46973, avg=26778.95, stdev=2542.57 00:35:05.542 lat (usec): min=9668, max=46990, avg=26801.80, stdev=2543.12 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[18482], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:35:05.542 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29754], 95.00th=[30802], 00:35:05.542 | 99.00th=[31327], 99.50th=[37487], 99.90th=[46924], 99.95th=[46924], 00:35:05.542 | 99.99th=[46924] 00:35:05.542 bw ( KiB/s): min= 2160, max= 2560, per=4.17%, avg=2364.37, stdev=118.16, samples=19 00:35:05.542 iops : min= 540, max= 640, avg=591.05, stdev=29.56, samples=19 00:35:05.542 lat (msec) : 10=0.27%, 20=0.77%, 50=98.96% 00:35:05.542 cpu : usr=98.07%, sys=1.19%, ctx=169, majf=0, minf=45 00:35:05.542 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838225: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10006msec) 00:35:05.542 slat (nsec): min=7480, max=79296, avg=34995.61, stdev=16285.45 00:35:05.542 clat (usec): min=8672, max=61377, avg=26705.69, stdev=2710.85 00:35:05.542 lat (usec): min=8693, max=61394, avg=26740.69, stdev=2709.68 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[21365], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:35:05.542 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540], 00:35:05.542 | 99.00th=[31327], 99.50th=[32375], 99.90th=[56361], 99.95th=[56361], 00:35:05.542 | 99.99th=[61604] 00:35:05.542 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2364.84, stdev=161.83, samples=19 00:35:05.542 iops : min= 512, max= 640, avg=591.21, stdev=40.46, samples=19 00:35:05.542 lat (msec) : 10=0.27%, 20=0.27%, 50=99.19%, 100=0.27% 00:35:05.542 cpu : usr=97.60%, sys=1.45%, ctx=194, majf=0, minf=49 00:35:05.542 IO depths : 1=5.8%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:05.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.542 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.542 filename2: (groupid=0, jobs=1): err= 0: pid=2838226: Mon Dec 9 17:45:33 2024 00:35:05.542 read: IOPS=593, BW=2372KiB/s (2429kB/s)(23.2MiB/10009msec) 00:35:05.542 slat (nsec): min=6550, max=88468, avg=24582.22, stdev=13832.34 00:35:05.542 clat (usec): min=16223, max=38266, avg=26760.77, stdev=1860.78 00:35:05.542 lat (usec): min=16234, max=38286, avg=26785.36, stdev=1861.64 00:35:05.542 clat percentiles (usec): 00:35:05.542 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 
00:35:05.542 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:35:05.542 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29754], 95.00th=[30802], 00:35:05.542 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:05.543 | 99.99th=[38011] 00:35:05.543 bw ( KiB/s): min= 2048, max= 2560, per=4.18%, avg=2371.37, stdev=150.05, samples=19 00:35:05.543 iops : min= 512, max= 640, avg=592.84, stdev=37.51, samples=19 00:35:05.543 lat (msec) : 20=0.30%, 50=99.70% 00:35:05.543 cpu : usr=98.42%, sys=1.00%, ctx=71, majf=0, minf=25 00:35:05.543 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.543 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.543 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.543 00:35:05.543 Run status group 0 (all jobs): 00:35:05.543 READ: bw=55.4MiB/s (58.1MB/s), 2363KiB/s-2390KiB/s (2420kB/s-2447kB/s), io=557MiB (584MB), run=10001-10055msec 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 bdev_null0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 [2024-12-09 17:45:33.446203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 bdev_null1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.543 { 00:35:05.543 "params": { 00:35:05.543 "name": "Nvme$subsystem", 00:35:05.543 "trtype": "$TEST_TRANSPORT", 
00:35:05.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.543 "adrfam": "ipv4", 00:35:05.543 "trsvcid": "$NVMF_PORT", 00:35:05.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.543 "hdgst": ${hdgst:-false}, 00:35:05.543 "ddgst": ${ddgst:-false} 00:35:05.543 }, 00:35:05.543 "method": "bdev_nvme_attach_controller" 00:35:05.543 } 00:35:05.543 EOF 00:35:05.543 )") 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.543 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.544 { 00:35:05.544 "params": { 00:35:05.544 "name": "Nvme$subsystem", 00:35:05.544 "trtype": "$TEST_TRANSPORT", 00:35:05.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.544 "adrfam": "ipv4", 00:35:05.544 "trsvcid": "$NVMF_PORT", 00:35:05.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.544 "hdgst": ${hdgst:-false}, 00:35:05.544 "ddgst": ${ddgst:-false} 00:35:05.544 }, 00:35:05.544 "method": "bdev_nvme_attach_controller" 00:35:05.544 } 00:35:05.544 EOF 00:35:05.544 )") 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ ))
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:05.544 "params": {
00:35:05.544 "name": "Nvme0",
00:35:05.544 "trtype": "tcp",
00:35:05.544 "traddr": "10.0.0.2",
00:35:05.544 "adrfam": "ipv4",
00:35:05.544 "trsvcid": "4420",
00:35:05.544 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:05.544 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:05.544 "hdgst": false,
00:35:05.544 "ddgst": false
00:35:05.544 },
00:35:05.544 "method": "bdev_nvme_attach_controller"
00:35:05.544 },{
00:35:05.544 "params": {
00:35:05.544 "name": "Nvme1",
00:35:05.544 "trtype": "tcp",
00:35:05.544 "traddr": "10.0.0.2",
00:35:05.544 "adrfam": "ipv4",
00:35:05.544 "trsvcid": "4420",
00:35:05.544 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:05.544 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:05.544 "hdgst": false,
00:35:05.544 "ddgst": false
00:35:05.544 },
00:35:05.544 "method": "bdev_nvme_attach_controller"
00:35:05.544 }'
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:05.544 17:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:05.544 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:35:05.544 ...
00:35:05.544 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:35:05.544 ...
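The trace above shows the harness wiring: gen_nvmf_target_json prints one bdev_nvme_attach_controller entry per target subsystem, and fio is started with the SPDK fio plugin preloaded so the spdk_bdev ioengine can read that JSON from /dev/fd/62 while the generated job file arrives on /dev/fd/61. A minimal standalone sketch of the same invocation follows; the /tmp file names and the outer "subsystems"/"bdev" wrapper (assumed to be what gen_nvmf_target_json emits around the printed params objects) are illustrative, while the plugin path, fio options, and attach parameters are taken from the trace itself.

    #!/usr/bin/env bash
    # Sketch: attach the two NVMe-oF TCP subsystems as SPDK bdevs and run fio
    # through the spdk_bdev ioengine. /tmp/bdev.json and /tmp/job.fio are
    # illustrative names; the JSON wrapper shape is an assumption.
    cat > /tmp/bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            },
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # Preload the plugin so fio can resolve ioengine=spdk_bdev.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/job.fio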
00:35:05.544 fio-3.35 00:35:05.544 Starting 4 threads 00:35:10.809 00:35:10.809 filename0: (groupid=0, jobs=1): err= 0: pid=2840161: Mon Dec 9 17:45:39 2024 00:35:10.809 read: IOPS=2779, BW=21.7MiB/s (22.8MB/s)(109MiB/5002msec) 00:35:10.809 slat (nsec): min=6035, max=28900, avg=8490.39, stdev=2725.69 00:35:10.809 clat (usec): min=739, max=43158, avg=2855.03, stdev=1034.96 00:35:10.809 lat (usec): min=749, max=43186, avg=2863.52, stdev=1035.00 00:35:10.809 clat percentiles (usec): 00:35:10.809 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:35:10.809 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 2966], 00:35:10.809 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3425], 00:35:10.809 | 99.00th=[ 3916], 99.50th=[ 4228], 99.90th=[ 4883], 99.95th=[43254], 00:35:10.809 | 99.99th=[43254] 00:35:10.809 bw ( KiB/s): min=19984, max=24128, per=26.22%, avg=22233.60, stdev=1109.70, samples=10 00:35:10.809 iops : min= 2498, max= 3016, avg=2779.20, stdev=138.71, samples=10 00:35:10.809 lat (usec) : 750=0.01%, 1000=0.01% 00:35:10.809 lat (msec) : 2=1.47%, 4=97.63%, 10=0.83%, 50=0.06% 00:35:10.809 cpu : usr=95.48%, sys=4.20%, ctx=7, majf=0, minf=9 00:35:10.809 IO depths : 1=0.1%, 2=3.1%, 4=66.3%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.809 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.809 issued rwts: total=13901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.809 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:10.809 filename0: (groupid=0, jobs=1): err= 0: pid=2840162: Mon Dec 9 17:45:39 2024 00:35:10.809 read: IOPS=2557, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5001msec) 00:35:10.809 slat (nsec): min=6039, max=61459, avg=8433.91, stdev=2944.56 00:35:10.809 clat (usec): min=786, max=5355, avg=3104.14, stdev=510.29 00:35:10.809 lat (usec): min=793, max=5362, avg=3112.57, stdev=509.79 00:35:10.809 clat percentiles (usec): 00:35:10.809 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2638], 20.00th=[ 2868], 00:35:10.809 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:35:10.809 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3720], 95.00th=[ 4293], 00:35:10.809 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5211], 00:35:10.809 | 99.99th=[ 5276] 00:35:10.809 bw ( KiB/s): min=19344, max=20944, per=24.11%, avg=20440.90, stdev=502.17, samples=10 00:35:10.809 iops : min= 2418, max= 2618, avg=2555.10, stdev=62.76, samples=10 00:35:10.809 lat (usec) : 1000=0.05% 00:35:10.809 lat (msec) : 2=0.74%, 4=91.56%, 10=7.65% 00:35:10.809 cpu : usr=96.18%, sys=3.52%, ctx=7, majf=0, minf=9 00:35:10.809 IO depths : 1=0.1%, 2=2.4%, 4=69.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.809 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.809 issued rwts: total=12788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.809 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:10.809 filename1: (groupid=0, jobs=1): err= 0: pid=2840163: Mon Dec 9 17:45:39 2024 00:35:10.809 read: IOPS=2604, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:35:10.809 slat (nsec): min=6048, max=37510, avg=8371.26, stdev=2922.10 00:35:10.809 clat (usec): min=989, max=5613, avg=3047.62, stdev=379.37 00:35:10.809 lat (usec): min=996, max=5619, avg=3055.99, stdev=379.10 00:35:10.809 clat percentiles (usec): 00:35:10.809 | 1.00th=[ 
2212], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2802], 00:35:10.809 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:35:10.809 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3720], 00:35:10.809 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5342], 00:35:10.809 | 99.99th=[ 5604] 00:35:10.809 bw ( KiB/s): min=20128, max=21408, per=24.57%, avg=20834.60, stdev=486.07, samples=10 00:35:10.809 iops : min= 2516, max= 2676, avg=2604.30, stdev=60.74, samples=10 00:35:10.809 lat (usec) : 1000=0.02% 00:35:10.809 lat (msec) : 2=0.35%, 4=97.21%, 10=2.43% 00:35:10.810 cpu : usr=96.22%, sys=3.48%, ctx=8, majf=0, minf=9 00:35:10.810 IO depths : 1=0.1%, 2=1.4%, 4=71.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.810 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.810 issued rwts: total=13024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.810 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:10.810 filename1: (groupid=0, jobs=1): err= 0: pid=2840165: Mon Dec 9 17:45:39 2024 00:35:10.810 read: IOPS=2657, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:35:10.810 slat (nsec): min=6062, max=63126, avg=8329.75, stdev=2895.01 00:35:10.810 clat (usec): min=1235, max=5488, avg=2985.91, stdev=421.46 00:35:10.810 lat (usec): min=1242, max=5495, avg=2994.24, stdev=421.14 00:35:10.810 clat percentiles (usec): 00:35:10.810 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2737], 00:35:10.810 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:10.810 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3720], 00:35:10.810 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5211], 00:35:10.810 | 99.99th=[ 5473] 00:35:10.810 bw ( KiB/s): min=19840, max=22720, per=25.08%, avg=21259.20, stdev=820.92, samples=10 00:35:10.810 iops : min= 2480, max= 2840, avg=2657.40, stdev=102.61, samples=10 00:35:10.810 lat (msec) : 2=0.77%, 4=96.43%, 10=2.81% 00:35:10.810 cpu : usr=96.06%, sys=3.64%, ctx=8, majf=0, minf=9 00:35:10.810 IO depths : 1=0.2%, 2=2.0%, 4=69.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.810 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.810 issued rwts: total=13295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.810 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:10.810 00:35:10.810 Run status group 0 (all jobs): 00:35:10.810 READ: bw=82.8MiB/s (86.8MB/s), 20.0MiB/s-21.7MiB/s (20.9MB/s-22.8MB/s), io=414MiB (434MB), run=5001-5002msec 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 17:45:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 00:35:10.810 real 0m24.444s 00:35:10.810 user 4m52.268s 00:35:10.810 sys 0m5.082s 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 ************************************ 00:35:10.810 END TEST fio_dif_rand_params 00:35:10.810 ************************************ 00:35:10.810 17:45:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:10.810 17:45:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:10.810 17:45:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 ************************************ 00:35:10.810 START TEST fio_dif_digest 00:35:10.810 ************************************ 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:10.810 17:45:39 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 bdev_null0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.810 [2024-12-09 17:45:39.893455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:10.810 { 00:35:10.810 "params": { 00:35:10.810 "name": "Nvme$subsystem", 00:35:10.810 "trtype": "$TEST_TRANSPORT", 
00:35:10.810 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.810 "adrfam": "ipv4",
00:35:10.810 "trsvcid": "$NVMF_PORT",
00:35:10.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.810 "hdgst": ${hdgst:-false},
00:35:10.810 "ddgst": ${ddgst:-false}
00:35:10.810 },
00:35:10.810 "method": "bdev_nvme_attach_controller"
00:35:10.810 }
00:35:10.810 EOF
00:35:10.810 )")
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
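Before this fio configuration step, the digest test built its target through the create_subsystem helper traced earlier; stripped of the rpc_cmd wrapper and xtrace noise, that sequence boils down to four RPCs. A sketch of the raw equivalents (scripts/rpc.py path relative to the checked-out spdk tree, default RPC socket assumed):

    # Raw equivalents of the traced rpc_cmd calls: a 64 MB, 512-byte-block
    # null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420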
00:35:10.810 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:10.811 "params": { 00:35:10.811 "name": "Nvme0", 00:35:10.811 "trtype": "tcp", 00:35:10.811 "traddr": "10.0.0.2", 00:35:10.811 "adrfam": "ipv4", 00:35:10.811 "trsvcid": "4420", 00:35:10.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.811 "hdgst": true, 00:35:10.811 "ddgst": true 00:35:10.811 }, 00:35:10.811 "method": "bdev_nvme_attach_controller" 00:35:10.811 }' 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.811 17:45:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.381 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:11.381 ... 
00:35:11.381 fio-3.35 00:35:11.381 Starting 3 threads 00:35:23.584 00:35:23.584 filename0: (groupid=0, jobs=1): err= 0: pid=2841418: Mon Dec 9 17:45:50 2024 00:35:23.584 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(371MiB/10046msec) 00:35:23.584 slat (nsec): min=6575, max=80335, avg=21919.38, stdev=6841.58 00:35:23.584 clat (usec): min=7683, max=51788, avg=10104.79, stdev=1261.17 00:35:23.584 lat (usec): min=7705, max=51816, avg=10126.71, stdev=1261.33 00:35:23.584 clat percentiles (usec): 00:35:23.584 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:35:23.584 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:35:23.584 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:35:23.584 | 99.00th=[11994], 99.50th=[12125], 99.90th=[14091], 99.95th=[46400], 00:35:23.584 | 99.99th=[51643] 00:35:23.584 bw ( KiB/s): min=35584, max=39424, per=35.58%, avg=38003.20, stdev=963.16, samples=20 00:35:23.584 iops : min= 278, max= 308, avg=296.90, stdev= 7.52, samples=20 00:35:23.584 lat (msec) : 10=46.68%, 20=53.25%, 50=0.03%, 100=0.03% 00:35:23.584 cpu : usr=95.26%, sys=4.01%, ctx=163, majf=0, minf=182 00:35:23.584 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.584 issued rwts: total=2971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:23.584 filename0: (groupid=0, jobs=1): err= 0: pid=2841419: Mon Dec 9 17:45:50 2024 00:35:23.584 read: IOPS=265, BW=33.2MiB/s (34.9MB/s)(334MiB/10044msec) 00:35:23.584 slat (nsec): min=6430, max=48488, avg=19486.54, stdev=8453.13 00:35:23.584 clat (usec): min=8227, max=49786, avg=11243.35, stdev=1345.67 00:35:23.584 lat (usec): min=8240, max=49799, avg=11262.83, stdev=1346.38 00:35:23.584 clat percentiles (usec): 00:35:23.584 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:35:23.584 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:35:23.584 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:35:23.584 | 99.00th=[13566], 99.50th=[13829], 99.90th=[15139], 99.95th=[44827], 00:35:23.584 | 99.99th=[49546] 00:35:23.584 bw ( KiB/s): min=31488, max=38144, per=31.98%, avg=34163.20, stdev=1521.28, samples=20 00:35:23.584 iops : min= 246, max= 298, avg=266.90, stdev=11.88, samples=20 00:35:23.584 lat (msec) : 10=7.71%, 20=92.21%, 50=0.07% 00:35:23.584 cpu : usr=96.88%, sys=2.77%, ctx=19, majf=0, minf=104 00:35:23.584 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.584 issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:23.584 filename0: (groupid=0, jobs=1): err= 0: pid=2841420: Mon Dec 9 17:45:50 2024 00:35:23.584 read: IOPS=272, BW=34.1MiB/s (35.8MB/s)(343MiB/10045msec) 00:35:23.584 slat (usec): min=6, max=119, avg=20.31, stdev= 9.45 00:35:23.584 clat (usec): min=8461, max=50060, avg=10951.86, stdev=1284.17 00:35:23.584 lat (usec): min=8487, max=50074, avg=10972.18, stdev=1284.33 00:35:23.584 clat percentiles (usec): 00:35:23.584 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 
00:35:23.584 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:35:23.584 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:35:23.584 | 99.00th=[13042], 99.50th=[13566], 99.90th=[14746], 99.95th=[45351], 00:35:23.584 | 99.99th=[50070] 00:35:23.584 bw ( KiB/s): min=33536, max=36352, per=32.83%, avg=35072.00, stdev=787.95, samples=20 00:35:23.584 iops : min= 262, max= 284, avg=274.00, stdev= 6.16, samples=20 00:35:23.584 lat (msec) : 10=11.74%, 20=88.18%, 50=0.04%, 100=0.04% 00:35:23.584 cpu : usr=97.20%, sys=2.46%, ctx=14, majf=0, minf=197 00:35:23.584 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.584 issued rwts: total=2742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:23.584 00:35:23.584 Run status group 0 (all jobs): 00:35:23.584 READ: bw=104MiB/s (109MB/s), 33.2MiB/s-37.0MiB/s (34.9MB/s-38.8MB/s), io=1048MiB (1099MB), run=10044-10046msec 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.584 00:35:23.584 real 0m11.182s 00:35:23.584 user 0m35.675s 00:35:23.584 sys 0m1.241s 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.584 17:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.584 ************************************ 00:35:23.584 END TEST fio_dif_digest 00:35:23.584 ************************************ 00:35:23.584 17:45:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:23.584 17:45:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:23.584 rmmod nvme_tcp 00:35:23.584 rmmod nvme_fabrics 00:35:23.584 rmmod nvme_keyring 00:35:23.584 17:45:51 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2832399 ']' 00:35:23.584 17:45:51 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2832399 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2832399 ']' 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2832399 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832399 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832399' 00:35:23.585 killing process with pid 2832399 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2832399 00:35:23.585 17:45:51 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2832399 00:35:23.585 17:45:51 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:23.585 17:45:51 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:24.964 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:25.223 Waiting for block devices as requested 00:35:25.223 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:25.223 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:25.482 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:25.482 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:25.482 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:25.741 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:25.741 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:25.741 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:25.741 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:26.000 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:26.000 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:26.000 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:26.258 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:26.258 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:26.258 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:26.258 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:26.517 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:26.517 17:45:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.517 17:45:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.517 17:45:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.053 
17:45:57 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.053 00:35:29.053 real 1m14.719s 00:35:29.053 user 7m11.001s 00:35:29.053 sys 0m20.528s 00:35:29.053 17:45:57 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.053 17:45:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.053 ************************************ 00:35:29.053 END TEST nvmf_dif 00:35:29.053 ************************************ 00:35:29.053 17:45:57 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:29.053 17:45:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.053 17:45:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.053 17:45:57 -- common/autotest_common.sh@10 -- # set +x 00:35:29.053 ************************************ 00:35:29.053 START TEST nvmf_abort_qd_sizes 00:35:29.053 ************************************ 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:29.053 * Looking for test storage... 00:35:29.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.053 --rc genhtml_branch_coverage=1 00:35:29.053 --rc genhtml_function_coverage=1 00:35:29.053 --rc genhtml_legend=1 00:35:29.053 --rc geninfo_all_blocks=1 00:35:29.053 --rc geninfo_unexecuted_blocks=1 00:35:29.053 00:35:29.053 ' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.053 --rc genhtml_branch_coverage=1 00:35:29.053 --rc genhtml_function_coverage=1 00:35:29.053 --rc genhtml_legend=1 00:35:29.053 --rc geninfo_all_blocks=1 00:35:29.053 --rc geninfo_unexecuted_blocks=1 00:35:29.053 00:35:29.053 ' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.053 --rc genhtml_branch_coverage=1 00:35:29.053 --rc genhtml_function_coverage=1 00:35:29.053 --rc genhtml_legend=1 00:35:29.053 --rc geninfo_all_blocks=1 00:35:29.053 --rc geninfo_unexecuted_blocks=1 00:35:29.053 00:35:29.053 ' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.053 --rc genhtml_branch_coverage=1 00:35:29.053 --rc genhtml_function_coverage=1 00:35:29.053 --rc genhtml_legend=1 00:35:29.053 --rc geninfo_all_blocks=1 00:35:29.053 --rc geninfo_unexecuted_blocks=1 00:35:29.053 00:35:29.053 ' 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.053 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:29.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:29.054 17:45:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:35.626 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:35.626 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:35.626 Found net devices under 0000:af:00.0: cvl_0_0 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:35.626 Found net devices under 0000:af:00.1: cvl_0_1 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:35.626 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:35.626 17:46:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:35.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:35.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:35:35.627 00:35:35.627 --- 10.0.0.2 ping statistics --- 00:35:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.627 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:35.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:35.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:35:35.627 00:35:35.627 --- 10.0.0.1 ping statistics --- 00:35:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.627 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:35.627 17:46:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:37.532 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:37.791 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.791 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:38.728 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:38.728 17:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2849353 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2849353 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2849353 ']' 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:38.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:38.729 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.986 [2024-12-09 17:46:07.945941] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:35:38.986 [2024-12-09 17:46:07.945985] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.986 [2024-12-09 17:46:08.022178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:38.986 [2024-12-09 17:46:08.064222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.986 [2024-12-09 17:46:08.064259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.986 [2024-12-09 17:46:08.064266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.986 [2024-12-09 17:46:08.064272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:38.986 [2024-12-09 17:46:08.064277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.986 [2024-12-09 17:46:08.065810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.986 [2024-12-09 17:46:08.065850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.986 [2024-12-09 17:46:08.065961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.986 [2024-12-09 17:46:08.065962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.986 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:38.986 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:38.986 17:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:38.986 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:38.986 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- 
scripts/common.sh@323 -- # uname -s 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.244 17:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:39.244 ************************************ 00:35:39.244 START TEST spdk_target_abort 00:35:39.244 ************************************ 00:35:39.244 17:46:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:39.244 17:46:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:39.244 17:46:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:39.244 17:46:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.244 17:46:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.518 spdk_targetn1 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.518 [2024-12-09 17:46:11.084150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.518 [2024-12-09 17:46:11.128432] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:42.518 17:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:42.518 17:46:11 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.791 Initializing NVMe Controllers 00:35:45.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:45.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:45.791 Initialization complete. Launching workers. 00:35:45.791 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14875, failed: 0 00:35:45.791 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1333, failed to submit 13542 00:35:45.791 success 707, unsuccessful 626, failed 0 00:35:45.791 17:46:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.791 17:46:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.063 Initializing NVMe Controllers 00:35:49.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:49.063 Initialization complete. Launching workers. 00:35:49.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8851, failed: 0 00:35:49.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1260, failed to submit 7591 00:35:49.063 success 336, unsuccessful 924, failed 0 00:35:49.063 17:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.063 17:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:52.336 Initializing NVMe Controllers 00:35:52.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:52.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:52.336 Initialization complete. Launching workers. 
00:35:52.336 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38787, failed: 0 00:35:52.336 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2731, failed to submit 36056 00:35:52.336 success 584, unsuccessful 2147, failed 0 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.336 17:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2849353 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2849353 ']' 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2849353 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2849353 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2849353' 00:35:53.268 killing process with pid 2849353 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2849353 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2849353 00:35:53.268 00:35:53.268 real 0m14.082s 00:35:53.268 user 0m53.669s 00:35:53.268 sys 0m2.596s 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:53.268 ************************************ 00:35:53.268 END TEST spdk_target_abort 00:35:53.268 ************************************ 00:35:53.268 17:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:53.268 17:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:53.268 17:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.268 17:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.268 ************************************ 00:35:53.268 START TEST kernel_target_abort 00:35:53.268 
************************************ 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:53.268 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:55.802 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:56.369 Waiting for block devices as requested 00:35:56.369 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:56.369 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:56.369 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:56.628 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:56.628 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:56.628 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:56.887 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:56.887 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:56.887 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:56.887 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:57.146 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:57.146 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:57.146 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:57.405 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:57.405 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:57.405 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:57.664 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:57.664 No valid GPT data, bailing 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:35:57.664 17:46:26 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:35:57.664 No valid GPT data, bailing 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # continue 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:57.664 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln 
-s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:57.923 00:35:57.923 Discovery Log Number of Records 2, Generation counter 2 00:35:57.923 =====Discovery Log Entry 0====== 00:35:57.923 trtype: tcp 00:35:57.923 adrfam: ipv4 00:35:57.923 subtype: current discovery subsystem 00:35:57.923 treq: not specified, sq flow control disable supported 00:35:57.923 portid: 1 00:35:57.923 trsvcid: 4420 00:35:57.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:57.923 traddr: 10.0.0.1 00:35:57.923 eflags: none 00:35:57.923 sectype: none 00:35:57.923 =====Discovery Log Entry 1====== 00:35:57.923 trtype: tcp 00:35:57.923 adrfam: ipv4 00:35:57.923 subtype: nvme subsystem 00:35:57.923 treq: not specified, sq flow control disable supported 00:35:57.923 portid: 1 00:35:57.923 trsvcid: 4420 00:35:57.923 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:57.923 traddr: 10.0.0.1 00:35:57.923 eflags: none 00:35:57.923 sectype: none 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.923 17:46:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:01.208 Initializing NVMe Controllers 00:36:01.208 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:01.209 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:01.209 Initialization complete. Launching workers. 00:36:01.209 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81918, failed: 0 00:36:01.209 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 81918, failed to submit 0 00:36:01.209 success 0, unsuccessful 81918, failed 0 00:36:01.209 17:46:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:01.209 17:46:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.494 Initializing NVMe Controllers 00:36:04.494 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:04.494 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:04.494 Initialization complete. Launching workers. 00:36:04.494 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138668, failed: 0 00:36:04.494 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32650, failed to submit 106018 00:36:04.494 success 0, unsuccessful 32650, failed 0 00:36:04.494 17:46:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:04.494 17:46:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.779 Initializing NVMe Controllers 00:36:07.779 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.779 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.779 Initialization complete. Launching workers. 
00:36:07.779 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 129513, failed: 0 00:36:07.779 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32406, failed to submit 97107 00:36:07.779 success 0, unsuccessful 32406, failed 0 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:07.779 17:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:09.684 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:10.327 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:10.327 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:11.265 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:11.265 00:36:11.265 real 0m17.889s 00:36:11.265 user 0m8.852s 00:36:11.265 sys 0m5.400s 00:36:11.265 17:46:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.265 17:46:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.265 ************************************ 00:36:11.265 END TEST kernel_target_abort 00:36:11.265 ************************************ 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@121 -- # sync 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:11.265 rmmod nvme_tcp 00:36:11.265 rmmod nvme_fabrics 00:36:11.265 rmmod nvme_keyring 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2849353 ']' 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2849353 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2849353 ']' 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2849353 00:36:11.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2849353) - No such process 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2849353 is not found' 00:36:11.265 Process with pid 2849353 is not found 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:11.265 17:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:13.802 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:14.370 Waiting for block devices as requested 00:36:14.370 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:14.370 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:14.370 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:14.629 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:14.629 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:14.629 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:14.888 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:14.888 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:14.888 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:14.888 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:15.146 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:15.146 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:15.146 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:15.405 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:15.405 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:15.405 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:15.405 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:15.664 17:46:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.567 17:46:46 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:17.567 00:36:17.567 real 0m48.989s 00:36:17.567 user 1m7.109s 00:36:17.567 sys 0m16.828s 00:36:17.567 17:46:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.826 17:46:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.826 ************************************ 00:36:17.826 END TEST nvmf_abort_qd_sizes 00:36:17.826 ************************************ 00:36:17.826 17:46:46 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:17.826 17:46:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:17.826 17:46:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.826 17:46:46 -- common/autotest_common.sh@10 -- # set +x 00:36:17.826 ************************************ 00:36:17.826 START TEST keyring_file 00:36:17.826 ************************************ 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:17.827 * Looking for test storage... 00:36:17.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.827 17:46:46 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.827 --rc genhtml_branch_coverage=1 00:36:17.827 --rc genhtml_function_coverage=1 00:36:17.827 --rc genhtml_legend=1 00:36:17.827 --rc geninfo_all_blocks=1 00:36:17.827 --rc geninfo_unexecuted_blocks=1 00:36:17.827 00:36:17.827 ' 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.827 --rc genhtml_branch_coverage=1 00:36:17.827 --rc genhtml_function_coverage=1 00:36:17.827 --rc genhtml_legend=1 00:36:17.827 --rc geninfo_all_blocks=1 00:36:17.827 --rc geninfo_unexecuted_blocks=1 00:36:17.827 00:36:17.827 ' 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.827 --rc genhtml_branch_coverage=1 00:36:17.827 --rc genhtml_function_coverage=1 00:36:17.827 --rc genhtml_legend=1 00:36:17.827 --rc geninfo_all_blocks=1 00:36:17.827 --rc geninfo_unexecuted_blocks=1 00:36:17.827 00:36:17.827 ' 00:36:17.827 17:46:46 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:17.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.827 --rc genhtml_branch_coverage=1 00:36:17.827 --rc genhtml_function_coverage=1 00:36:17.827 --rc genhtml_legend=1 00:36:17.827 --rc geninfo_all_blocks=1 00:36:17.827 --rc geninfo_unexecuted_blocks=1 00:36:17.827 00:36:17.827 ' 00:36:17.827 17:46:46 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:17.827 17:46:46 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.827 17:46:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:17.827 17:46:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.827 17:46:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.827 17:46:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.827 17:46:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.827 17:46:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.827 
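The cmp_versions walk above splits each version string on '.', '-' and ':' and compares the pieces numerically, position by position, to decide that lcov 1.15 sorts before 2. A minimal standalone sketch of that comparison (simplified: it assumes purely numeric components and ignores the other comparison operators the real helper in scripts/common.sh also accepts):

version_lt() {
    # Return 0 if $1 sorts strictly before $2, comparing numeric
    # components split on '.', '-' and ':' (a simplified sketch of
    # the cmp_versions logic traced above).
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2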
17:46:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.827 17:46:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.827 17:46:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.827 17:46:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.827 17:46:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.087 17:46:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.087 17:46:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.087 17:46:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.087 17:46:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.087 17:46:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.087 17:46:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.087 17:46:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.087 17:46:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:18.087 17:46:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@51 -- # : 0 
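The NVME_HOSTNQN value above comes from `nvme gen-hostnqn`, which emits the UUID-based NQN form nqn.2014-08.org.nvmexpress:uuid:<UUID> defined by the NVMe base specification. Where nvme-cli is unavailable, an equivalent identifier can be built from any RFC 4122 UUID source; a sketch (uuidgen being present on the host is an assumption):

# Stand-in for `nvme gen-hostnqn`: a fresh UUID-based host NQN.
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
echo "$hostnqn"   # e.g. nqn.2014-08.org.nvmexpress:uuid:<36-char UUID>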
00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:18.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5qPwmVteKb 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5qPwmVteKb 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5qPwmVteKb 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5qPwmVteKb 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.huUnw5E0dk 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:18.087 17:46:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.huUnw5E0dk 00:36:18.087 17:46:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.huUnw5E0dk 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.huUnw5E0dk 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=2858184 00:36:18.087 17:46:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2858184 00:36:18.087 17:46:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2858184 ']' 00:36:18.087 17:46:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.087 17:46:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.087 17:46:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.087 17:46:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.087 17:46:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.087 [2024-12-09 17:46:47.166745] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
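The prep_key calls above turn a raw hex string into an on-disk TLS PSK: format_interchange_psk wraps the key in the NVMe/TCP PSK interchange format (NVMeTLSkey-1:<hash>:<Base64 payload>:) via an inline Python snippet, writes it to a mktemp path, and locks it down to mode 0600. A sketch of the formatting step under stated assumptions: the key argument is taken as literal ASCII bytes and the Base64 payload carries the key followed by a little-endian CRC-32, which matches the interchange format's shape but is inferred rather than lifted from the helper's source:

format_interchange_psk() {
  # Emit NVMeTLSkey-1:<digest>:<Base64(key bytes + CRC-32)>: on stdout.
  # digest 0 here means no PSK hash, as in the traced prep_key calls.
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # literal ASCII bytes (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")   # appended CRC-32 (assumption)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"   # the keyring rejects group/world-readable key files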
00:36:18.087 [2024-12-09 17:46:47.166791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858184 ] 00:36:18.087 [2024-12-09 17:46:47.237676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.346 [2024-12-09 17:46:47.279432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.346 17:46:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.346 17:46:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:18.346 17:46:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:18.346 17:46:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.346 17:46:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.346 [2024-12-09 17:46:47.509514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.605 null0 00:36:18.605 [2024-12-09 17:46:47.541562] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:18.605 [2024-12-09 17:46:47.541858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.605 17:46:47 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.605 [2024-12-09 17:46:47.569628] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:18.605 request: 00:36:18.605 { 00:36:18.605 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:18.605 "secure_channel": false, 00:36:18.605 "listen_address": { 00:36:18.605 "trtype": "tcp", 00:36:18.605 "traddr": "127.0.0.1", 00:36:18.605 "trsvcid": "4420" 00:36:18.605 }, 00:36:18.605 "method": "nvmf_subsystem_add_listener", 00:36:18.605 "req_id": 1 00:36:18.605 } 00:36:18.605 Got JSON-RPC error response 00:36:18.605 response: 00:36:18.605 { 00:36:18.605 "code": -32602, 00:36:18.605 "message": "Invalid parameters" 00:36:18.605 } 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:18.605 17:46:47 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:18.605 17:46:47 keyring_file -- keyring/file.sh@47 -- # bperfpid=2858231 00:36:18.605 17:46:47 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2858231 /var/tmp/bperf.sock 00:36:18.605 17:46:47 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2858231 ']' 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:18.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.605 17:46:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.605 [2024-12-09 17:46:47.625510] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:36:18.605 [2024-12-09 17:46:47.625555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858231 ] 00:36:18.605 [2024-12-09 17:46:47.700549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.605 [2024-12-09 17:46:47.741234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.863 17:46:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.863 17:46:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:18.863 17:46:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:18.863 17:46:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:18.863 17:46:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.huUnw5E0dk 00:36:18.863 17:46:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.huUnw5E0dk 00:36:19.121 17:46:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:19.121 17:46:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:19.121 17:46:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.121 17:46:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.121 17:46:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.379 17:46:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5qPwmVteKb == \/\t\m\p\/\t\m\p\.\5\q\P\w\m\V\t\e\K\b ]] 00:36:19.379 17:46:48 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:19.379 17:46:48 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:19.379 17:46:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.379 17:46:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.379 17:46:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.636 17:46:48 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.huUnw5E0dk == \/\t\m\p\/\t\m\p\.\h\u\U\n\w\5\E\0\d\k ]] 00:36:19.636 17:46:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.636 17:46:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:19.636 17:46:48 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.636 17:46:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.895 17:46:48 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:19.895 17:46:48 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.895 17:46:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.153 [2024-12-09 17:46:49.157004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:20.153 nvme0n1 00:36:20.153 17:46:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:20.153 17:46:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.153 17:46:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.153 17:46:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.153 17:46:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.153 17:46:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.411 17:46:49 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:20.411 17:46:49 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:20.411 17:46:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:20.411 17:46:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.411 17:46:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.411 17:46:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:20.411 17:46:49 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:20.669 17:46:49 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:36:20.669 17:46:49 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:20.669 Running I/O for 1 seconds...
00:36:21.602 19156.00 IOPS, 74.83 MiB/s
00:36:21.602 Latency(us)
00:36:21.602 [2024-12-09T16:46:50.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:21.602 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:36:21.602 nvme0n1 : 1.00 19198.26 74.99 0.00 0.00 6654.80 2808.69 11172.33
00:36:21.602 [2024-12-09T16:46:50.781Z] ===================================================================================================================
00:36:21.602 [2024-12-09T16:46:50.781Z] Total : 19198.26 74.99 0.00 0.00 6654.80 2808.69 11172.33
00:36:21.602 {
00:36:21.602   "results": [
00:36:21.602     {
00:36:21.602       "job": "nvme0n1",
00:36:21.602       "core_mask": "0x2",
00:36:21.602       "workload": "randrw",
00:36:21.602       "percentage": 50,
00:36:21.602       "status": "finished",
00:36:21.602       "queue_depth": 128,
00:36:21.602       "io_size": 4096,
00:36:21.602       "runtime": 1.004518,
00:36:21.602       "iops": 19198.262251149306,
00:36:21.602       "mibps": 74.99321191855198,
00:36:21.602       "io_failed": 0,
00:36:21.602       "io_timeout": 0,
00:36:21.602       "avg_latency_us": 6654.800232107362,
00:36:21.602       "min_latency_us": 2808.6857142857143,
00:36:21.602       "max_latency_us": 11172.327619047619
00:36:21.602     }
00:36:21.603   ],
00:36:21.603   "core_count": 1
00:36:21.603 }
00:36:21.603 17:46:50 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:36:21.603 17:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:36:21.860 17:46:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:36:21.860 17:46:50 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:36:21.860 17:46:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:36:21.860 17:46:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:21.860 17:46:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:36:21.860 17:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:22.118 17:46:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:36:22.118 17:46:51 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:36:22.118 17:46:51 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:36:22.118 17:46:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:36:22.118 17:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:22.118 17:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:36:22.118 17:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:22.377 17:46:51 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:36:22.377 17:46:51 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
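Each get_refcnt check in this stretch is one keyring_get_keys RPC against the bdevperf app plus a jq filter; the refcnt of 2 seen for key0 reflects the keyring's own reference plus the attached nvme0 controller. A condensed sketch of the same query:

get_refcnt() {
  # Fetch all registered keys over the bperf RPC socket and pull out
  # the refcnt of the named one (condensed form of the traced helpers).
  local name=$1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r --arg n "$name" '.[] | select(.name == $n) | .refcnt'
}

(( $(get_refcnt key0) == 2 ))   # keyring reference + the live nvme0 attach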
00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.377 17:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.377 [2024-12-09 17:46:51.523307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:22.377 [2024-12-09 17:46:51.523587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2591770 (107): Transport endpoint is not connected 00:36:22.377 [2024-12-09 17:46:51.524581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2591770 (9): Bad file descriptor 00:36:22.377 [2024-12-09 17:46:51.525582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:22.377 [2024-12-09 17:46:51.525591] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:22.377 [2024-12-09 17:46:51.525599] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:22.377 [2024-12-09 17:46:51.525607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
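The NOT wrapper driving this step is the harness's negative-test helper: it runs the wrapped command and succeeds only when that command fails, which is why the expected attach failure with the mismatched key1 still counts as a pass. A reduced sketch (the traced helper in autotest_common.sh additionally validates its argument and special-cases exit codes above 128):

NOT() {
  # Negative-test helper: succeed only when the wrapped command fails.
  local es=0
  "$@" || es=$?
  (( es != 0 ))
}

NOT false && echo "expected failure observed"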
00:36:22.377 request: {
00:36:22.377   "name": "nvme0",
00:36:22.377   "trtype": "tcp",
00:36:22.377   "traddr": "127.0.0.1",
00:36:22.377   "adrfam": "ipv4",
00:36:22.377   "trsvcid": "4420",
00:36:22.377   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:22.377   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:22.377   "prchk_reftag": false,
00:36:22.377   "prchk_guard": false,
00:36:22.377   "hdgst": false,
00:36:22.377   "ddgst": false,
00:36:22.377   "psk": "key1",
00:36:22.377   "allow_unrecognized_csi": false,
00:36:22.377   "method": "bdev_nvme_attach_controller",
00:36:22.377   "req_id": 1
00:36:22.377 }
00:36:22.377 Got JSON-RPC error response
00:36:22.377 response:
00:36:22.377 {
00:36:22.377   "code": -5,
00:36:22.377   "message": "Input/output error"
00:36:22.377 }
00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:22.377 17:46:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:22.377 17:46:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:36:22.377 17:46:51 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:36:22.377 17:46:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:36:22.377 17:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:22.377 17:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:36:22.377 17:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:22.635 17:46:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:36:22.635 17:46:51 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:36:22.635 17:46:51 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:36:22.635 17:46:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:36:22.635 17:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:22.635 17:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:36:22.635 17:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:22.893 17:46:51 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:36:22.893 17:46:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:36:22.893 17:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:36:23.151 17:46:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:36:23.151 17:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:36:23.410 17:46:52 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:36:23.410 17:46:52 keyring_file -- keyring/file.sh@78 -- # jq length
00:36:23.410 17:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:23.410 17:46:52 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:36:23.410 17:46:52 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5qPwmVteKb
00:36:23.410 17:46:52 keyring_file -- keyring/file.sh@82 -- #
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:23.410 17:46:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:23.410 17:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:23.669 [2024-12-09 17:46:52.734818] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5qPwmVteKb': 0100660 00:36:23.669 [2024-12-09 17:46:52.734842] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:23.669 request: 00:36:23.669 { 00:36:23.669 "name": "key0", 00:36:23.669 "path": "/tmp/tmp.5qPwmVteKb", 00:36:23.669 "method": "keyring_file_add_key", 00:36:23.669 "req_id": 1 00:36:23.669 } 00:36:23.669 Got JSON-RPC error response 00:36:23.669 response: 00:36:23.669 { 00:36:23.669 "code": -1, 00:36:23.669 "message": "Operation not permitted" 00:36:23.669 } 00:36:23.669 17:46:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:23.669 17:46:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:23.669 17:46:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:23.669 17:46:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:23.669 17:46:52 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5qPwmVteKb 00:36:23.669 17:46:52 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:23.669 17:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5qPwmVteKb 00:36:23.927 17:46:52 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5qPwmVteKb 00:36:23.927 17:46:52 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:23.927 17:46:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.927 17:46:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:23.927 17:46:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.927 17:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.927 17:46:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:24.186 17:46:53 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:24.186 17:46:53 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.186 17:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.186 [2024-12-09 17:46:53.344437] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5qPwmVteKb': No such file or directory 00:36:24.186 [2024-12-09 17:46:53.344462] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:24.186 [2024-12-09 17:46:53.344479] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:24.186 [2024-12-09 17:46:53.344486] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:24.186 [2024-12-09 17:46:53.344493] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:24.186 [2024-12-09 17:46:53.344500] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:24.186 request: 00:36:24.186 { 00:36:24.186 "name": "nvme0", 00:36:24.186 "trtype": "tcp", 00:36:24.186 "traddr": "127.0.0.1", 00:36:24.186 "adrfam": "ipv4", 00:36:24.186 "trsvcid": "4420", 00:36:24.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.186 "prchk_reftag": false, 00:36:24.186 "prchk_guard": false, 00:36:24.186 "hdgst": false, 00:36:24.186 "ddgst": false, 00:36:24.186 "psk": "key0", 00:36:24.186 "allow_unrecognized_csi": false, 00:36:24.186 "method": "bdev_nvme_attach_controller", 00:36:24.186 "req_id": 1 00:36:24.186 } 00:36:24.186 Got JSON-RPC error response 00:36:24.186 response: 00:36:24.186 { 00:36:24.186 "code": -19, 00:36:24.186 "message": "No such device" 00:36:24.186 } 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:24.186 17:46:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:24.186 17:46:53 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:24.186 17:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:24.444 17:46:53 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.b0QZCCfC2O 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.444 17:46:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.444 17:46:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:24.444 17:46:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:24.444 17:46:53 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:24.444 17:46:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:24.444 17:46:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.b0QZCCfC2O 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.b0QZCCfC2O 00:36:24.444 17:46:53 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.b0QZCCfC2O 00:36:24.444 17:46:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.b0QZCCfC2O 00:36:24.444 17:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.b0QZCCfC2O 00:36:24.703 17:46:53 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.703 17:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.961 nvme0n1 00:36:24.961 17:46:54 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:24.961 17:46:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:24.961 17:46:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:24.961 17:46:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.961 17:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.961 17:46:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.219 17:46:54 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:25.219 17:46:54 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:25.219 17:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:25.477 17:46:54 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:25.477 17:46:54 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:25.477 17:46:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.477 17:46:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.477 17:46:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.735 17:46:54 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:25.735 17:46:54 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:25.735 17:46:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.735 17:46:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.735 17:46:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.735 17:46:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.735 17:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.735 17:46:54 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:25.735 17:46:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:25.735 17:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:25.993 17:46:55 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:25.993 17:46:55 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:25.993 17:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.250 17:46:55 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:26.250 17:46:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.b0QZCCfC2O 00:36:26.250 17:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.b0QZCCfC2O 00:36:26.507 17:46:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.huUnw5E0dk 00:36:26.507 17:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.huUnw5E0dk 00:36:26.507 17:46:55 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.507 17:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.765 nvme0n1 00:36:26.765 17:46:55 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:26.765 17:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:27.023 17:46:56 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:27.023 "subsystems": [ 00:36:27.023 { 00:36:27.023 "subsystem": "keyring", 00:36:27.023 "config": [ 00:36:27.023 { 00:36:27.023 "method": "keyring_file_add_key", 00:36:27.023 "params": { 00:36:27.023 "name": "key0", 00:36:27.023 "path": "/tmp/tmp.b0QZCCfC2O" 00:36:27.023 } 00:36:27.023 }, 00:36:27.023 { 00:36:27.023 "method": "keyring_file_add_key", 00:36:27.023 "params": { 00:36:27.023 "name": "key1", 00:36:27.023 "path": "/tmp/tmp.huUnw5E0dk" 00:36:27.023 } 00:36:27.023 } 00:36:27.023 ] 00:36:27.023 
}, 00:36:27.023 { 00:36:27.023 "subsystem": "iobuf", 00:36:27.023 "config": [ 00:36:27.023 { 00:36:27.023 "method": "iobuf_set_options", 00:36:27.023 "params": { 00:36:27.023 "small_pool_count": 8192, 00:36:27.023 "large_pool_count": 1024, 00:36:27.023 "small_bufsize": 8192, 00:36:27.023 "large_bufsize": 135168, 00:36:27.023 "enable_numa": false 00:36:27.023 } 00:36:27.023 } 00:36:27.023 ] 00:36:27.023 }, 00:36:27.023 { 00:36:27.024 "subsystem": "sock", 00:36:27.024 "config": [ 00:36:27.024 { 00:36:27.024 "method": "sock_set_default_impl", 00:36:27.024 "params": { 00:36:27.024 "impl_name": "posix" 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "sock_impl_set_options", 00:36:27.024 "params": { 00:36:27.024 "impl_name": "ssl", 00:36:27.024 "recv_buf_size": 4096, 00:36:27.024 "send_buf_size": 4096, 00:36:27.024 "enable_recv_pipe": true, 00:36:27.024 "enable_quickack": false, 00:36:27.024 "enable_placement_id": 0, 00:36:27.024 "enable_zerocopy_send_server": true, 00:36:27.024 "enable_zerocopy_send_client": false, 00:36:27.024 "zerocopy_threshold": 0, 00:36:27.024 "tls_version": 0, 00:36:27.024 "enable_ktls": false 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "sock_impl_set_options", 00:36:27.024 "params": { 00:36:27.024 "impl_name": "posix", 00:36:27.024 "recv_buf_size": 2097152, 00:36:27.024 "send_buf_size": 2097152, 00:36:27.024 "enable_recv_pipe": true, 00:36:27.024 "enable_quickack": false, 00:36:27.024 "enable_placement_id": 0, 00:36:27.024 "enable_zerocopy_send_server": true, 00:36:27.024 "enable_zerocopy_send_client": false, 00:36:27.024 "zerocopy_threshold": 0, 00:36:27.024 "tls_version": 0, 00:36:27.024 "enable_ktls": false 00:36:27.024 } 00:36:27.024 } 00:36:27.024 ] 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "subsystem": "vmd", 00:36:27.024 "config": [] 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "subsystem": "accel", 00:36:27.024 "config": [ 00:36:27.024 { 00:36:27.024 "method": "accel_set_options", 00:36:27.024 "params": { 00:36:27.024 "small_cache_size": 128, 00:36:27.024 "large_cache_size": 16, 00:36:27.024 "task_count": 2048, 00:36:27.024 "sequence_count": 2048, 00:36:27.024 "buf_count": 2048 00:36:27.024 } 00:36:27.024 } 00:36:27.024 ] 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "subsystem": "bdev", 00:36:27.024 "config": [ 00:36:27.024 { 00:36:27.024 "method": "bdev_set_options", 00:36:27.024 "params": { 00:36:27.024 "bdev_io_pool_size": 65535, 00:36:27.024 "bdev_io_cache_size": 256, 00:36:27.024 "bdev_auto_examine": true, 00:36:27.024 "iobuf_small_cache_size": 128, 00:36:27.024 "iobuf_large_cache_size": 16 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "bdev_raid_set_options", 00:36:27.024 "params": { 00:36:27.024 "process_window_size_kb": 1024, 00:36:27.024 "process_max_bandwidth_mb_sec": 0 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "bdev_iscsi_set_options", 00:36:27.024 "params": { 00:36:27.024 "timeout_sec": 30 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "bdev_nvme_set_options", 00:36:27.024 "params": { 00:36:27.024 "action_on_timeout": "none", 00:36:27.024 "timeout_us": 0, 00:36:27.024 "timeout_admin_us": 0, 00:36:27.024 "keep_alive_timeout_ms": 10000, 00:36:27.024 "arbitration_burst": 0, 00:36:27.024 "low_priority_weight": 0, 00:36:27.024 "medium_priority_weight": 0, 00:36:27.024 "high_priority_weight": 0, 00:36:27.024 "nvme_adminq_poll_period_us": 10000, 00:36:27.024 "nvme_ioq_poll_period_us": 0, 00:36:27.024 "io_queue_requests": 512, 00:36:27.024 
"delay_cmd_submit": true, 00:36:27.024 "transport_retry_count": 4, 00:36:27.024 "bdev_retry_count": 3, 00:36:27.024 "transport_ack_timeout": 0, 00:36:27.024 "ctrlr_loss_timeout_sec": 0, 00:36:27.024 "reconnect_delay_sec": 0, 00:36:27.024 "fast_io_fail_timeout_sec": 0, 00:36:27.024 "disable_auto_failback": false, 00:36:27.024 "generate_uuids": false, 00:36:27.024 "transport_tos": 0, 00:36:27.024 "nvme_error_stat": false, 00:36:27.024 "rdma_srq_size": 0, 00:36:27.024 "io_path_stat": false, 00:36:27.024 "allow_accel_sequence": false, 00:36:27.024 "rdma_max_cq_size": 0, 00:36:27.024 "rdma_cm_event_timeout_ms": 0, 00:36:27.024 "dhchap_digests": [ 00:36:27.024 "sha256", 00:36:27.024 "sha384", 00:36:27.024 "sha512" 00:36:27.024 ], 00:36:27.024 "dhchap_dhgroups": [ 00:36:27.024 "null", 00:36:27.024 "ffdhe2048", 00:36:27.024 "ffdhe3072", 00:36:27.024 "ffdhe4096", 00:36:27.024 "ffdhe6144", 00:36:27.024 "ffdhe8192" 00:36:27.024 ] 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "bdev_nvme_attach_controller", 00:36:27.024 "params": { 00:36:27.024 "name": "nvme0", 00:36:27.024 "trtype": "TCP", 00:36:27.024 "adrfam": "IPv4", 00:36:27.024 "traddr": "127.0.0.1", 00:36:27.024 "trsvcid": "4420", 00:36:27.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.024 "prchk_reftag": false, 00:36:27.024 "prchk_guard": false, 00:36:27.024 "ctrlr_loss_timeout_sec": 0, 00:36:27.024 "reconnect_delay_sec": 0, 00:36:27.024 "fast_io_fail_timeout_sec": 0, 00:36:27.024 "psk": "key0", 00:36:27.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.024 "hdgst": false, 00:36:27.024 "ddgst": false, 00:36:27.024 "multipath": "multipath" 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "bdev_nvme_set_hotplug", 00:36:27.024 "params": { 00:36:27.024 "period_us": 100000, 00:36:27.024 "enable": false 00:36:27.024 } 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "method": "bdev_wait_for_examine" 00:36:27.024 } 00:36:27.024 ] 00:36:27.024 }, 00:36:27.024 { 00:36:27.024 "subsystem": "nbd", 00:36:27.024 "config": [] 00:36:27.024 } 00:36:27.024 ] 00:36:27.024 }' 00:36:27.024 17:46:56 keyring_file -- keyring/file.sh@115 -- # killprocess 2858231 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2858231 ']' 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2858231 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858231 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858231' 00:36:27.024 killing process with pid 2858231 00:36:27.024 17:46:56 keyring_file -- common/autotest_common.sh@973 -- # kill 2858231 00:36:27.024 Received shutdown signal, test time was about 1.000000 seconds 00:36:27.024 00:36:27.024 Latency(us) 00:36:27.024 [2024-12-09T16:46:56.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.024 [2024-12-09T16:46:56.203Z] =================================================================================================================== 00:36:27.024 [2024-12-09T16:46:56.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.024 17:46:56 
keyring_file -- common/autotest_common.sh@978 -- # wait 2858231 00:36:27.283 17:46:56 keyring_file -- keyring/file.sh@118 -- # bperfpid=2859767 00:36:27.283 17:46:56 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2859767 /var/tmp/bperf.sock 00:36:27.283 17:46:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2859767 ']' 00:36:27.283 17:46:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:27.283 17:46:56 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:27.283 17:46:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:27.283 17:46:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:27.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:27.283 17:46:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:27.283 17:46:56 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:27.283 "subsystems": [ 00:36:27.283 { 00:36:27.283 "subsystem": "keyring", 00:36:27.283 "config": [ 00:36:27.283 { 00:36:27.283 "method": "keyring_file_add_key", 00:36:27.283 "params": { 00:36:27.283 "name": "key0", 00:36:27.283 "path": "/tmp/tmp.b0QZCCfC2O" 00:36:27.283 } 00:36:27.283 }, 00:36:27.283 { 00:36:27.283 "method": "keyring_file_add_key", 00:36:27.283 "params": { 00:36:27.283 "name": "key1", 00:36:27.283 "path": "/tmp/tmp.huUnw5E0dk" 00:36:27.283 } 00:36:27.283 } 00:36:27.283 ] 00:36:27.283 }, 00:36:27.283 { 00:36:27.283 "subsystem": "iobuf", 00:36:27.283 "config": [ 00:36:27.283 { 00:36:27.283 "method": "iobuf_set_options", 00:36:27.283 "params": { 00:36:27.283 "small_pool_count": 8192, 00:36:27.283 "large_pool_count": 1024, 00:36:27.283 "small_bufsize": 8192, 00:36:27.283 "large_bufsize": 135168, 00:36:27.283 "enable_numa": false 00:36:27.283 } 00:36:27.283 } 00:36:27.283 ] 00:36:27.283 }, 00:36:27.283 { 00:36:27.283 "subsystem": "sock", 00:36:27.283 "config": [ 00:36:27.283 { 00:36:27.283 "method": "sock_set_default_impl", 00:36:27.283 "params": { 00:36:27.283 "impl_name": "posix" 00:36:27.283 } 00:36:27.283 }, 00:36:27.283 { 00:36:27.283 "method": "sock_impl_set_options", 00:36:27.283 "params": { 00:36:27.283 "impl_name": "ssl", 00:36:27.283 "recv_buf_size": 4096, 00:36:27.283 "send_buf_size": 4096, 00:36:27.283 "enable_recv_pipe": true, 00:36:27.283 "enable_quickack": false, 00:36:27.283 "enable_placement_id": 0, 00:36:27.283 "enable_zerocopy_send_server": true, 00:36:27.283 "enable_zerocopy_send_client": false, 00:36:27.283 "zerocopy_threshold": 0, 00:36:27.283 "tls_version": 0, 00:36:27.283 "enable_ktls": false 00:36:27.283 } 00:36:27.283 }, 00:36:27.283 { 00:36:27.283 "method": "sock_impl_set_options", 00:36:27.283 "params": { 00:36:27.283 "impl_name": "posix", 00:36:27.283 "recv_buf_size": 2097152, 00:36:27.283 "send_buf_size": 2097152, 00:36:27.283 "enable_recv_pipe": true, 00:36:27.283 "enable_quickack": false, 00:36:27.283 "enable_placement_id": 0, 00:36:27.283 "enable_zerocopy_send_server": true, 00:36:27.283 "enable_zerocopy_send_client": false, 00:36:27.283 "zerocopy_threshold": 0, 00:36:27.283 "tls_version": 0, 00:36:27.283 "enable_ktls": false 00:36:27.283 } 00:36:27.283 } 00:36:27.283 ] 00:36:27.283 }, 00:36:27.283 { 00:36:27.283 "subsystem": "vmd", 00:36:27.283 "config": [] 00:36:27.283 }, 
00:36:27.284 { 00:36:27.284 "subsystem": "accel", 00:36:27.284 "config": [ 00:36:27.284 { 00:36:27.284 "method": "accel_set_options", 00:36:27.284 "params": { 00:36:27.284 "small_cache_size": 128, 00:36:27.284 "large_cache_size": 16, 00:36:27.284 "task_count": 2048, 00:36:27.284 "sequence_count": 2048, 00:36:27.284 "buf_count": 2048 00:36:27.284 } 00:36:27.284 } 00:36:27.284 ] 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "subsystem": "bdev", 00:36:27.284 "config": [ 00:36:27.284 { 00:36:27.284 "method": "bdev_set_options", 00:36:27.284 "params": { 00:36:27.284 "bdev_io_pool_size": 65535, 00:36:27.284 "bdev_io_cache_size": 256, 00:36:27.284 "bdev_auto_examine": true, 00:36:27.284 "iobuf_small_cache_size": 128, 00:36:27.284 "iobuf_large_cache_size": 16 00:36:27.284 } 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "method": "bdev_raid_set_options", 00:36:27.284 "params": { 00:36:27.284 "process_window_size_kb": 1024, 00:36:27.284 "process_max_bandwidth_mb_sec": 0 00:36:27.284 } 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "method": "bdev_iscsi_set_options", 00:36:27.284 "params": { 00:36:27.284 "timeout_sec": 30 00:36:27.284 } 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "method": "bdev_nvme_set_options", 00:36:27.284 "params": { 00:36:27.284 "action_on_timeout": "none", 00:36:27.284 "timeout_us": 0, 00:36:27.284 "timeout_admin_us": 0, 00:36:27.284 "keep_alive_timeout_ms": 10000, 00:36:27.284 "arbitration_burst": 0, 00:36:27.284 "low_priority_weight": 0, 00:36:27.284 "medium_priority_weight": 0, 00:36:27.284 "high_priority_weight": 0, 00:36:27.284 "nvme_adminq_poll_period_us": 10000, 00:36:27.284 "nvme_ioq_poll_period_us": 0, 00:36:27.284 "io_queue_requests": 512, 00:36:27.284 "delay_cmd_submit": true, 00:36:27.284 "transport_retry_count": 4, 00:36:27.284 "bdev_retry_count": 3, 00:36:27.284 "transport_ack_timeout": 0, 00:36:27.284 "ctrlr_loss_timeout_sec": 0, 00:36:27.284 "reconnect_delay_sec": 0, 00:36:27.284 "fast_io_fail_timeout_sec": 0, 00:36:27.284 "disable_auto_failback": false, 00:36:27.284 "generate_uuids": false, 00:36:27.284 "transport_tos": 0, 00:36:27.284 "nvme_error_stat": false, 00:36:27.284 "rdma_srq_size": 0, 00:36:27.284 "io_path_stat": false, 00:36:27.284 "allow_accel_sequence": false, 00:36:27.284 "rdma_max_cq_size": 0, 00:36:27.284 "rdma_cm_event_timeout_ms": 0, 00:36:27.284 "dhchap_digests": [ 00:36:27.284 "sha256", 00:36:27.284 "sha384", 00:36:27.284 "sha512" 00:36:27.284 ], 00:36:27.284 "dhchap_dhgroups": [ 00:36:27.284 "null", 00:36:27.284 "ffdhe2048", 00:36:27.284 "ffdhe3072", 00:36:27.284 "ffdhe4096", 00:36:27.284 "ffdhe6144", 00:36:27.284 "ffdhe8192" 00:36:27.284 ] 00:36:27.284 } 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "method": "bdev_nvme_attach_controller", 00:36:27.284 "params": { 00:36:27.284 "name": "nvme0", 00:36:27.284 "trtype": "TCP", 00:36:27.284 "adrfam": "IPv4", 00:36:27.284 "traddr": "127.0.0.1", 00:36:27.284 "trsvcid": "4420", 00:36:27.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.284 "prchk_reftag": false, 00:36:27.284 "prchk_guard": false, 00:36:27.284 "ctrlr_loss_timeout_sec": 0, 00:36:27.284 "reconnect_delay_sec": 0, 00:36:27.284 "fast_io_fail_timeout_sec": 0, 00:36:27.284 "psk": "key0", 00:36:27.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.284 "hdgst": false, 00:36:27.284 "ddgst": false, 00:36:27.284 "multipath": "multipath" 00:36:27.284 } 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "method": "bdev_nvme_set_hotplug", 00:36:27.284 "params": { 00:36:27.284 "period_us": 100000, 00:36:27.284 "enable": false 00:36:27.284 } 00:36:27.284 }, 
00:36:27.284 { 00:36:27.284 "method": "bdev_wait_for_examine" 00:36:27.284 } 00:36:27.284 ] 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "subsystem": "nbd", 00:36:27.284 "config": [] 00:36:27.284 } 00:36:27.284 ] 00:36:27.284 }' 00:36:27.284 17:46:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:27.284 [2024-12-09 17:46:56.396101] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 00:36:27.284 [2024-12-09 17:46:56.396149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859767 ] 00:36:27.543 [2024-12-09 17:46:56.468082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.543 [2024-12-09 17:46:56.508644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.543 [2024-12-09 17:46:56.670159] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:28.108 17:46:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.108 17:46:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:28.108 17:46:57 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:28.108 17:46:57 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:28.108 17:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.366 17:46:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:28.366 17:46:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:28.366 17:46:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.366 17:46:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.366 17:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.366 17:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.366 17:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.624 17:46:57 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:28.624 17:46:57 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:28.624 17:46:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:28.624 17:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.624 17:46:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.624 17:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.624 17:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:28.883 17:46:57 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:28.883 17:46:57 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:28.883 17:46:57 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:28.883 17:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:28.883 17:46:58 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:28.883 17:46:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:28.883 17:46:58 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.b0QZCCfC2O /tmp/tmp.huUnw5E0dk 00:36:28.883 17:46:58 keyring_file -- keyring/file.sh@20 -- # killprocess 2859767 00:36:28.883 17:46:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2859767 ']' 00:36:28.883 17:46:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2859767 00:36:28.883 17:46:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:28.883 17:46:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:28.883 17:46:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859767 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859767' 00:36:29.142 killing process with pid 2859767 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@973 -- # kill 2859767 00:36:29.142 Received shutdown signal, test time was about 1.000000 seconds 00:36:29.142 00:36:29.142 Latency(us) 00:36:29.142 [2024-12-09T16:46:58.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.142 [2024-12-09T16:46:58.321Z] =================================================================================================================== 00:36:29.142 [2024-12-09T16:46:58.321Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@978 -- # wait 2859767 00:36:29.142 17:46:58 keyring_file -- keyring/file.sh@21 -- # killprocess 2858184 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2858184 ']' 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2858184 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858184 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858184' 00:36:29.142 killing process with pid 2858184 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@973 -- # kill 2858184 00:36:29.142 17:46:58 keyring_file -- common/autotest_common.sh@978 -- # wait 2858184 00:36:29.711 00:36:29.711 real 0m11.767s 00:36:29.711 user 0m29.307s 00:36:29.711 sys 0m2.660s 00:36:29.711 17:46:58 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:29.711 17:46:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:29.711 ************************************ 00:36:29.711 END TEST keyring_file 00:36:29.711 ************************************ 00:36:29.711 17:46:58 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:29.711 17:46:58 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:29.711 17:46:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:29.711 17:46:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:29.711 17:46:58 -- 
common/autotest_common.sh@10 -- # set +x 00:36:29.711 ************************************ 00:36:29.711 START TEST keyring_linux 00:36:29.711 ************************************ 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:29.711 Joined session keyring: 1035113369 00:36:29.711 * Looking for test storage... 00:36:29.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:29.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.711 --rc genhtml_branch_coverage=1 00:36:29.711 --rc genhtml_function_coverage=1 00:36:29.711 --rc genhtml_legend=1 00:36:29.711 --rc geninfo_all_blocks=1 00:36:29.711 --rc geninfo_unexecuted_blocks=1 00:36:29.711 00:36:29.711 ' 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:29.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.711 --rc genhtml_branch_coverage=1 00:36:29.711 --rc genhtml_function_coverage=1 00:36:29.711 --rc genhtml_legend=1 00:36:29.711 --rc geninfo_all_blocks=1 00:36:29.711 --rc geninfo_unexecuted_blocks=1 00:36:29.711 00:36:29.711 ' 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:29.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.711 --rc genhtml_branch_coverage=1 00:36:29.711 --rc genhtml_function_coverage=1 00:36:29.711 --rc genhtml_legend=1 00:36:29.711 --rc geninfo_all_blocks=1 00:36:29.711 --rc geninfo_unexecuted_blocks=1 00:36:29.711 00:36:29.711 ' 00:36:29.711 17:46:58 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:29.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.711 --rc genhtml_branch_coverage=1 00:36:29.711 --rc genhtml_function_coverage=1 00:36:29.711 --rc genhtml_legend=1 00:36:29.711 --rc geninfo_all_blocks=1 00:36:29.711 --rc geninfo_unexecuted_blocks=1 00:36:29.711 00:36:29.711 ' 00:36:29.711 17:46:58 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:29.711 17:46:58 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.711 17:46:58 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.711 17:46:58 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.711 17:46:58 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.711 17:46:58 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.711 17:46:58 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.711 17:46:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:29.711 17:46:58 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
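A note on the host identity established in the sourcing step above: test/nvmf/common.sh asks nvme-cli for a fresh host NQN and reuses its UUID suffix as the host ID, then bundles both as connect arguments. A minimal standalone sketch follows (it assumes nvme-cli is installed; the UUID extraction shown is one way to produce the logged NVME_HOSTID value, not necessarily the script's exact code):

#!/usr/bin/env bash
# Host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>.
NVME_HOSTNQN=$(nvme gen-hostnqn)

# The logged NVME_HOSTID equals the UUID suffix after the last ':'.
NVME_HOSTID=${NVME_HOSTNQN##*:}

# Bundle both as reusable connect arguments, matching NVME_HOST above.
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
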
00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:29.712 17:46:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:29.712 17:46:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:29.712 17:46:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:29.712 17:46:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:29.712 17:46:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:29.712 17:46:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:29.712 17:46:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:29.712 17:46:58 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:29.971 /tmp/:spdk-test:key0 00:36:29.971 17:46:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:29.971 
17:46:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:29.971 17:46:58 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:29.971 17:46:58 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:29.971 17:46:58 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:29.971 17:46:58 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:29.971 17:46:58 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:29.971 17:46:58 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:29.971 17:46:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:29.971 /tmp/:spdk-test:key1 00:36:29.971 17:46:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:29.971 17:46:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2860241 00:36:29.971 17:46:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2860241 00:36:29.971 17:46:58 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2860241 ']' 00:36:29.971 17:46:58 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.971 17:46:58 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.971 17:46:58 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.971 17:46:58 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.971 17:46:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:29.971 [2024-12-09 17:46:58.992062] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
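The two key files staged above hold the NVMe/TCP PSK interchange form of the configured hex keys. Judging from the logged helper calls and payloads, format_interchange_psk base64-encodes the ASCII key bytes plus a little-endian CRC32 and wraps the result as NVMeTLSkey-1:00:<base64>:, with 00 denoting the no-hash digest field. A hedged reconstruction for key0 (encoding inferred from the trace, not copied from keyring/common.sh):

#!/usr/bin/env bash
# Sketch of prep_key: derive the interchange PSK and stage it mode 0600.
key=00112233445566778899aabbccddeeff
path=/tmp/:spdk-test:key0

psk=$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC appended little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
' "$key")

touch "$path"
chmod 0600 "$path"    # restrict permissions before the secret is written
echo "$psk" > "$path"

Run against key0, this should reproduce the NVMeTLSkey-1:00:MDAx...JEiQ: payload that the trace compares later.
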
00:36:29.971 [2024-12-09 17:46:58.992107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860241 ] 00:36:29.971 [2024-12-09 17:46:59.065272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.971 [2024-12-09 17:46:59.106682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:30.230 17:46:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.230 [2024-12-09 17:46:59.326003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.230 null0 00:36:30.230 [2024-12-09 17:46:59.358056] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:30.230 [2024-12-09 17:46:59.358357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.230 17:46:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:30.230 27811151 00:36:30.230 17:46:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:30.230 427964229 00:36:30.230 17:46:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2860383 00:36:30.230 17:46:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2860383 /var/tmp/bperf.sock 00:36:30.230 17:46:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2860383 ']' 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:30.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.230 17:46:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.488 [2024-12-09 17:46:59.432774] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization... 
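The keyctl calls just above are the heart of the keyring_linux test: the PSK lives in the kernel session keyring rather than in a file, and SPDK resolves it by name at attach time. The round trip, with the key name and payload as in the trace (serial numbers will differ per run):

#!/usr/bin/env bash
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

# Load the PSK into the session keyring; keyctl prints the new serial.
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)

# Name-to-serial lookup, as check_keys does before comparing payloads.
keyctl search @s user :spdk-test:key0   # prints the same serial

# Read the payload back for byte-for-byte verification.
keyctl print "$sn"

# Unlink on cleanup so the key does not leak into the next test.
keyctl unlink "$sn" @s
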
00:36:30.488 [2024-12-09 17:46:59.432816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860383 ] 00:36:30.488 [2024-12-09 17:46:59.508101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.488 [2024-12-09 17:46:59.548981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.488 17:46:59 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:30.488 17:46:59 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:30.488 17:46:59 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:30.489 17:46:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:30.747 17:46:59 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:30.747 17:46:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:31.005 17:47:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:31.005 17:47:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:31.005 [2024-12-09 17:47:00.169462] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:31.263 nvme0n1 00:36:31.263 17:47:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:31.263 17:47:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:31.263 17:47:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:31.263 17:47:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:31.263 17:47:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:31.263 17:47:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:31.521 17:47:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.521 17:47:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.521 17:47:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@25 -- # sn=27811151 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:31.521 17:47:00 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 27811151 == \2\7\8\1\1\1\5\1 ]] 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 27811151 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:31.521 17:47:00 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.779 Running I/O for 1 seconds... 00:36:32.713 21745.00 IOPS, 84.94 MiB/s 00:36:32.713 Latency(us) 00:36:32.713 [2024-12-09T16:47:01.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.713 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:32.713 nvme0n1 : 1.01 21745.96 84.95 0.00 0.00 5866.85 4587.52 14667.58 00:36:32.713 [2024-12-09T16:47:01.892Z] =================================================================================================================== 00:36:32.713 [2024-12-09T16:47:01.892Z] Total : 21745.96 84.95 0.00 0.00 5866.85 4587.52 14667.58 00:36:32.713 { 00:36:32.713 "results": [ 00:36:32.713 { 00:36:32.713 "job": "nvme0n1", 00:36:32.713 "core_mask": "0x2", 00:36:32.713 "workload": "randread", 00:36:32.713 "status": "finished", 00:36:32.713 "queue_depth": 128, 00:36:32.713 "io_size": 4096, 00:36:32.713 "runtime": 1.005888, 00:36:32.713 "iops": 21745.959788763757, 00:36:32.713 "mibps": 84.94515542485843, 00:36:32.713 "io_failed": 0, 00:36:32.713 "io_timeout": 0, 00:36:32.713 "avg_latency_us": 5866.846910051943, 00:36:32.713 "min_latency_us": 4587.52, 00:36:32.713 "max_latency_us": 14667.580952380953 00:36:32.713 } 00:36:32.713 ], 00:36:32.713 "core_count": 1 00:36:32.713 } 00:36:32.713 17:47:01 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:32.713 17:47:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:32.971 17:47:01 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:32.971 17:47:01 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:32.971 17:47:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:32.971 17:47:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:32.971 17:47:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.971 17:47:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
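Each assertion in the checks above is a thin wrapper over the bperf RPC socket: keyring_get_keys is issued through rpc.py and the JSON result is filtered with jq. A standalone equivalent, using the same socket path and key name as the trace (run while bdevperf is still up):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# How many keys does the app currently hold?
count=$("$rpc" -s "$sock" keyring_get_keys | jq length)

# Pull the serial recorded for one key, to compare with `keyctl search`.
sn=$("$rpc" -s "$sock" keyring_get_keys |
  jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')

echo "keys=$count sn=$sn"
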
00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.229 17:47:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.229 [2024-12-09 17:47:02.356728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:33.229 [2024-12-09 17:47:02.357309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c6500 (107): Transport endpoint is not connected 00:36:33.229 [2024-12-09 17:47:02.358304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c6500 (9): Bad file descriptor 00:36:33.229 [2024-12-09 17:47:02.359305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:33.229 [2024-12-09 17:47:02.359316] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:33.229 [2024-12-09 17:47:02.359323] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:33.229 [2024-12-09 17:47:02.359331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:33.229 request: 00:36:33.229 { 00:36:33.229 "name": "nvme0", 00:36:33.229 "trtype": "tcp", 00:36:33.229 "traddr": "127.0.0.1", 00:36:33.229 "adrfam": "ipv4", 00:36:33.229 "trsvcid": "4420", 00:36:33.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.229 "prchk_reftag": false, 00:36:33.229 "prchk_guard": false, 00:36:33.229 "hdgst": false, 00:36:33.229 "ddgst": false, 00:36:33.229 "psk": ":spdk-test:key1", 00:36:33.229 "allow_unrecognized_csi": false, 00:36:33.229 "method": "bdev_nvme_attach_controller", 00:36:33.229 "req_id": 1 00:36:33.229 } 00:36:33.229 Got JSON-RPC error response 00:36:33.229 response: 00:36:33.229 { 00:36:33.229 "code": -5, 00:36:33.229 "message": "Input/output error" 00:36:33.229 } 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:33.229 17:47:02 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@33 -- # sn=27811151 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 27811151 00:36:33.229 1 links removed 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:33.229 17:47:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:33.230 17:47:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:33.230 17:47:02 keyring_linux -- keyring/linux.sh@33 -- # sn=427964229 00:36:33.230 17:47:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 427964229 00:36:33.230 1 links removed 00:36:33.230 17:47:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2860383 00:36:33.230 17:47:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2860383 ']' 00:36:33.230 17:47:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2860383 00:36:33.230 17:47:02 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:33.230 17:47:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.230 17:47:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860383 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860383' 00:36:33.488 killing process with pid 2860383 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 2860383 00:36:33.488 Received shutdown signal, test time was about 1.000000 seconds 00:36:33.488 00:36:33.488 
Latency(us) 00:36:33.488 [2024-12-09T16:47:02.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.488 [2024-12-09T16:47:02.667Z] =================================================================================================================== 00:36:33.488 [2024-12-09T16:47:02.667Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 2860383 00:36:33.488 17:47:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2860241 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2860241 ']' 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2860241 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860241 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:33.488 17:47:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:33.489 17:47:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860241' 00:36:33.489 killing process with pid 2860241 00:36:33.489 17:47:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 2860241 00:36:33.489 17:47:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 2860241 00:36:34.057 00:36:34.057 real 0m4.294s 00:36:34.057 user 0m8.061s 00:36:34.057 sys 0m1.447s 00:36:34.057 17:47:02 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.057 17:47:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:34.057 ************************************ 00:36:34.057 END TEST keyring_linux 00:36:34.057 ************************************ 00:36:34.057 17:47:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:34.057 17:47:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:34.057 17:47:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:34.057 17:47:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:34.057 17:47:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:34.057 17:47:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:34.057 17:47:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:34.057 17:47:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:34.057 17:47:02 -- common/autotest_common.sh@10 -- # set +x 00:36:34.057 17:47:02 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:34.057 17:47:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:34.057 17:47:02 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:34.057 17:47:02 -- common/autotest_common.sh@10 -- # set +x 00:36:39.382 INFO: APP EXITING 
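Both daemons above are torn down through the same killprocess helper, whose flow is visible in the xtrace: verify the pid is set and alive, resolve the command name on Linux, then kill and reap. A simplified reconstruction (the real helper lives in autotest_common.sh; the sudo-escalation branch here is an illustrative guess at what the comm-name comparison guards):

#!/usr/bin/env bash
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1              # still alive?
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then sudo kill "$pid"; return; fi
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"   # reap; works because the test started it from this shell
}
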
00:36:39.382 INFO: killing all VMs 00:36:39.382 INFO: killing vhost app 00:36:39.382 INFO: EXIT DONE 00:36:41.918 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:42.176 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:42.176 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:42.435 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:42.694 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:42.694 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:42.694 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:45.230 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:45.489 Cleaning 00:36:45.489 Removing: /var/run/dpdk/spdk0/config 00:36:45.489 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:45.489 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:45.748 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:45.748 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:45.748 Removing: /var/run/dpdk/spdk1/config 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:45.748 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:45.748 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:45.748 Removing: /var/run/dpdk/spdk2/config 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:45.748 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:45.748 Removing: 
/var/run/dpdk/spdk2/fbarray_memzone 00:36:45.748 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:45.748 Removing: /var/run/dpdk/spdk3/config 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:45.748 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:45.748 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:45.748 Removing: /var/run/dpdk/spdk4/config 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:45.748 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:45.748 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:45.748 Removing: /dev/shm/bdev_svc_trace.1 00:36:45.748 Removing: /dev/shm/nvmf_trace.0 00:36:45.748 Removing: /dev/shm/spdk_tgt_trace.pid2383966 00:36:45.748 Removing: /var/run/dpdk/spdk0 00:36:45.748 Removing: /var/run/dpdk/spdk1 00:36:45.748 Removing: /var/run/dpdk/spdk2 00:36:45.748 Removing: /var/run/dpdk/spdk3 00:36:45.748 Removing: /var/run/dpdk/spdk4 00:36:45.748 Removing: /var/run/dpdk/spdk_pid2381325 00:36:45.748 Removing: /var/run/dpdk/spdk_pid2382768 00:36:45.748 Removing: /var/run/dpdk/spdk_pid2383966 00:36:45.748 Removing: /var/run/dpdk/spdk_pid2384599 00:36:45.748 Removing: /var/run/dpdk/spdk_pid2385532 00:36:45.748 Removing: /var/run/dpdk/spdk_pid2385556 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2386519 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2386735 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2386954 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2388590 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2389870 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2390163 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2390450 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2390750 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2391041 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2391287 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2391538 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2391815 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2392553 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2395513 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2395766 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2396025 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2396029 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2396525 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2396528 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2397016 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2397027 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2397280 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2397511 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2397764 00:36:46.007 Removing: /var/run/dpdk/spdk_pid2397770 00:36:46.007 Removing: 
/var/run/dpdk/spdk_pid2398331
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2398575
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2398871
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2402545
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2406988
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2417130
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2417710
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2421970
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2422294
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2427034
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2432856
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2435432
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2445749
00:36:46.007 Removing: /var/run/dpdk/spdk_pid2454587
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2456395
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2457317
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2474766
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2478799
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2524494
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2529623
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2535333
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2541741
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2541749
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2542652
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2543552
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2544303
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2544913
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2544920
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2545153
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2545374
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2545379
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2546291
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2547011
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2547889
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2548566
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2548568
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2548793
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2549810
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2550785
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2558988
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2587980
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2592435
00:36:46.008 Removing: /var/run/dpdk/spdk_pid2594020
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2595833
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2595857
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2596082
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2596241
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2596676
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2598925
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2599895
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2600262
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2602459
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2602938
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2603441
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2607690
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2613136
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2613138
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2613139
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2616956
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2625570
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2629684
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2635837
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2637057
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2638440
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2639984
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2645147
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2649448
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2653328
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2660740
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2660746
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2665392
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2665639
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2665816
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2666100
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2666161
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2670654
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2671171
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2675625
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2678306
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2683519
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2688756
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2697952
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2705107
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2705118
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2724031
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2724501
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2725117
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2725649
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2726377
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2726850
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2727358
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2728004
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2732221
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2732457
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2738453
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2738511
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2744429
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2748630
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2758151
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2758707
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2762920
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2763165
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2767305
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2772954
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2775503
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2785339
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2794481
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2796239
00:36:46.267 Removing: /var/run/dpdk/spdk_pid2797137
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2813125
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2816899
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2819667
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2827426
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2827431
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2832447
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2834511
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2836842
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2838086
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2840034
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2841104
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2850015
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2850479
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2850935
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2853321
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2853875
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2854333
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2858184
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2858231
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2859767
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2860241
00:36:46.526 Removing: /var/run/dpdk/spdk_pid2860383
00:36:46.526 Clean
00:36:46.526 17:47:15 -- common/autotest_common.sh@1453 -- # return 0
00:36:46.526 17:47:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:46.526 17:47:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:46.526 17:47:15 -- common/autotest_common.sh@10 -- # set +x
00:36:46.526 17:47:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:46.526 17:47:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:46.526 17:47:15 -- common/autotest_common.sh@10 -- # set +x
00:36:46.526 17:47:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:46.526 17:47:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:46.526 17:47:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:46.526 17:47:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:46.526 17:47:15 -- spdk/autotest.sh@398 -- # hostname
00:36:46.526 17:47:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:46.785 geninfo: WARNING: invalid characters removed from testname!
00:37:08.715 17:47:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:10.093 17:47:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:12.099 17:47:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:13.999 17:47:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:15.375 17:47:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:17.279 17:47:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:19.183 17:47:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:19.183 17:47:48 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:19.183 17:47:48 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:37:19.183 17:47:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:19.183 17:47:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:19.183 17:47:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:19.183 + [[ -n 2302893 ]]
00:37:19.183 + sudo kill 2302893
00:37:19.192 [Pipeline] }
00:37:19.207 [Pipeline] // stage
00:37:19.212 [Pipeline] }
00:37:19.226 [Pipeline] // timeout
00:37:19.231 [Pipeline] }
00:37:19.245 [Pipeline] // catchError
00:37:19.250 [Pipeline] }
00:37:19.264 [Pipeline] // wrap
00:37:19.270 [Pipeline] }
00:37:19.282 [Pipeline] // catchError
00:37:19.290 [Pipeline] stage
00:37:19.292 [Pipeline] { (Epilogue)
00:37:19.304 [Pipeline] catchError
00:37:19.306 [Pipeline] {
00:37:19.318 [Pipeline] echo
00:37:19.320 Cleanup processes
00:37:19.325 [Pipeline] sh
00:37:19.610 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:19.610 2871409 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:19.624 [Pipeline] sh
00:37:19.909 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:19.909 ++ grep -v 'sudo pgrep'
00:37:19.909 ++ awk '{print $1}'
00:37:19.909 + sudo kill -9
00:37:19.909 + true
00:37:19.921 [Pipeline] sh
00:37:20.205 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:32.429 [Pipeline] sh
00:37:32.716 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:32.716 Artifacts sizes are good
00:37:32.730 [Pipeline] archiveArtifacts
00:37:32.737 Archiving artifacts
00:37:32.857 [Pipeline] sh
00:37:33.142 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:33.156 [Pipeline] cleanWs
00:37:33.165 [WS-CLEANUP] Deleting project workspace...
00:37:33.165 [WS-CLEANUP] Deferred wipeout is used...
00:37:33.172 [WS-CLEANUP] done
00:37:33.174 [Pipeline] }
00:37:33.190 [Pipeline] // catchError
00:37:33.201 [Pipeline] sh
00:37:33.520 + logger -p user.info -t JENKINS-CI
00:37:33.529 [Pipeline] }
00:37:33.542 [Pipeline] // stage
00:37:33.547 [Pipeline] }
00:37:33.561 [Pipeline] // node
00:37:33.566 [Pipeline] End of Pipeline
00:37:33.604 Finished: SUCCESS